Semantic-based interaction for teaching robot behavior compositions
Victor Paléologue, Jocelyn Martin, A. Pandey, Alexandre Coninx, M. Chetouani
2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), August 2017. DOI: 10.1109/ROMAN.2017.8172279
Abstract: Allowing humans to teach robots behaviors will facilitate acceptability as well as long-term interactions. Humans would mainly use speech to transfer knowledge or to teach high-level behaviors. In this paper, we propose a proof-of-concept application allowing a Pepper robot to learn behaviors from natural-language descriptions provided by naive human users. In our model, natural language input is provided by grammar-free speech recognition and is then processed to produce semantic knowledge, grounded in language and primitive behaviors. The same semantic knowledge is used to represent any kind of perceived input as well as the actions the robot can perform. The experiment shows that the system can work independently of the application domain, but also that it has limitations. Progress in semantic extraction, behavior planning, and interaction scenario design could stretch these limits.
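To make the described pipeline concrete, the following is a minimal, hypothetical sketch of the general idea: a teaching utterance is reduced to simple semantic frames, each frame is grounded to a primitive behavior, and the sequence is stored as a new named composition that can later be performed. It is not the authors' implementation; all names (Frame, PRIMITIVES, teach, perform, the example behaviors) are invented for illustration, and the "semantic extraction" is a toy keyword matcher standing in for the paper's grammar-free processing.

    # Hypothetical illustration of teaching a behavior composition from language.
    # Nothing here is taken from the paper's code; primitives just print.
    from dataclasses import dataclass
    from typing import Callable, Dict, List


    @dataclass
    class Frame:
        """A toy semantic frame: an action predicate plus an optional object."""
        action: str
        obj: str = ""


    # Primitive behaviors the robot already knows (stand-ins for Pepper actions).
    PRIMITIVES: Dict[str, Callable[[str], None]] = {
        "say": lambda obj: print(f"[robot] saying '{obj}'"),
        "raise_arm": lambda obj: print("[robot] raising arm"),
    }

    # Learned composite behaviors, keyed by the name the user taught.
    LEARNED: Dict[str, List[Frame]] = {}


    def extract_frames(utterance: str) -> List[Frame]:
        """Very rough stand-in for grammar-free semantic extraction."""
        frames: List[Frame] = []
        for clause in utterance.lower().split(" and "):
            words = clause.split()
            if words and words[0] == "say":
                frames.append(Frame("say", " ".join(words[1:])))
            elif words[:2] == ["raise", "your"]:
                frames.append(Frame("raise_arm"))
        return frames


    def teach(name: str, utterance: str) -> None:
        """Store a new behavior as a composition of grounded primitives."""
        LEARNED[name] = extract_frames(utterance)


    def perform(name: str) -> None:
        """Execute a learned composition by replaying its primitives in order."""
        for frame in LEARNED[name]:
            PRIMITIVES[frame.action](frame.obj)


    if __name__ == "__main__":
        teach("greet", "say hello and raise your arm")
        perform("greet")  # -> saying 'hello', then raising arm

The point of the sketch is the representational choice the abstract highlights: the same frame structure describes both what is heard and what the robot can do, so composing a new behavior is just storing a sequence of grounded frames under a name.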