R. T. Peralta, Tasneem Kaochar, Ian R. Fasel, C. Morrison, Thomas J. Walsh, P. Cohen
Title: Challenges to decoding the intention behind natural instruction
Published in: 2011 RO-MAN
Publication date: 2011-08-30
DOI: 10.1109/ROMAN.2011.6005273 (https://doi.org/10.1109/ROMAN.2011.6005273)
Citations: 9
Abstract
Currently, most systems for human-robot teaching allow only one mode of teacher-student interaction (e.g., teaching by demonstration or feedback), and teaching episodes have to be carefully set up by an expert. To understand how we might integrate multiple, interleaved forms of human instruction into a robot learner, we performed a behavioral study in which 44 untrained humans were allowed to freely mix interaction modes to teach a simulated robot (secretly controlled by a human) a complex task. Analysis of transcripts showed that human teachers often give instructions that are nontrivial to interpret and not easily translated into a form usable by machine-learning algorithms. In particular, humans often use implicit instructions, fail to clearly indicate the boundaries of procedures, and tightly interleave testing, feedback, and new instruction. In this paper, we detail these teaching patterns and discuss the challenges they pose to automatic teaching interpretation as well as to the machine-learning algorithms that must ultimately process these instructions. We highlight the challenges by demonstrating the difficulties of an initial automatic teacher-interpretation system.