Title: Robot Plan Model Generation and Execution with Natural Language Interface
Authors: Kyon-Mo Yang, Kap-Ho Seo, S. Kang, Yoonseob Lim
DOI: 10.1109/ICRA40945.2020.9196987
Published in: 2020 IEEE International Conference on Robotics and Automation (ICRA), May 2020, pp. 6973-6978
Citations: 1
Abstract
Verbal interaction between a human and a robot may play a key role in conveying suitable directions for a robot to achieve the goal of a user’s request. However, a robot may need to correct task plans or make new decisions with human help, which makes the interaction inconvenient and increases the interaction time. In this paper, we propose a new verbal interaction-based method that can generate plan models and execute proper actions without human involvement while the robot performs a task. To understand the verbal behavior of humans when giving instructions to a robot, we first conducted a brief user study and found that human users do not explicitly express the required task. To handle such unclear human instructions, we propose two algorithms: one generates components of new plan models from intents and entities parsed from natural language, and the other resolves the unclear entities in human instructions. An experimental scenario with a robot, Cozmo, was carried out in a lab environment to test whether the proposed method could generate an appropriate plan model. We found that the robot could successfully accomplish the task following human instructions, and that the number of interactions and the number of components in the plan model could be reduced compared with a general reactive plan model. In the future, we plan to improve the automated process of generating plan models and to apply it to various scenarios with different service environments and robots.
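To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical sketch of the idea of parsing an instruction into an intent and entity slots and assembling a plan-model component that flags unresolved entities instead of asking the user. All keyword tables, function names, and the component format here are invented for illustration; the paper's actual parsing method and plan-model representation are not specified in the abstract.

```python
# Hypothetical illustration only: a toy keyword-based intent/entity parser
# feeding a plan-model component. Not the authors' algorithms.

INTENT_KEYWORDS = {"bring": "deliver", "go": "navigate", "pick": "grasp"}
KNOWN_OBJECTS = {"cup", "cube", "charger"}
KNOWN_PLACES = {"table", "desk", "dock"}


def parse_instruction(text):
    """Extract an intent label and entity slots from a simple command."""
    words = text.lower().replace(".", "").split()
    intent = next((INTENT_KEYWORDS[w] for w in words if w in INTENT_KEYWORDS), None)
    obj = next((w for w in words if w in KNOWN_OBJECTS), None)
    place = next((w for w in words if w in KNOWN_PLACES), None)
    return intent, {"object": obj, "place": place}


def build_plan_component(intent, entities):
    """Assemble one plan-model component, recording unresolved (None)
    entities so they can be filled from context or defaults later,
    rather than interrupting the user mid-task."""
    unresolved = [k for k, v in entities.items() if v is None]
    return {"action": intent, "entities": entities, "unresolved": unresolved}


intent, entities = parse_instruction("Bring the cup to the table")
component = build_plan_component(intent, entities)
print(component)
```

In this sketch, an instruction that omits an entity (e.g. "Bring the cup") would yield `unresolved == ["place"]`, mirroring the abstract's point that users often leave tasks underspecified and the system must resolve such gaps without further dialogue.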