{"title":"Learning Sequential Human-Robot Interaction Tasks from Demonstrations: The Role of Temporal Reasoning","authors":"Estuardo Carpio, Madison Clark-Turner, M. Begum","doi":"10.1109/RO-MAN46459.2019.8956346","DOIUrl":null,"url":null,"abstract":"There are many human-robot interaction (HRI) tasks that are highly structured and follow a certain temporal sequence. Learning such tasks from demonstrations requires understanding the underlying rules governing the interactions. This involves identifying and generalizing the key spatial and temporal features of the task and capturing the high-level relationships among them. Despite its crucial role in sequential task learning, temporal reasoning is often ignored in existing learning from demonstration (LFD) research. This paper proposes a holistic LFD framework that learns the underlying temporal structure of sequential HRI tasks. The proposed Temporal-Reasoning-based LFD (TR-LFD) framework relies on an automated spatial reasoning layer to identify and generalize relevant spatial features, and a temporal reasoning layer to analyze and learn the high-level temporal structure of a HRI task. We evaluate the performance of this framework by learning a well-explored task in HRI research: robot-mediated autism intervention. The source code for this implementation is available at https://github.com/AssistiveRoboticsUNH/TR-LFD.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN46459.2019.8956346","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Many human-robot interaction (HRI) tasks are highly structured and follow a fixed temporal sequence. Learning such tasks from demonstrations requires understanding the underlying rules governing the interactions. This involves identifying and generalizing the key spatial and temporal features of the task and capturing the high-level relationships among them. Despite its crucial role in sequential task learning, temporal reasoning is often ignored in existing learning from demonstration (LFD) research. This paper proposes a holistic LFD framework that learns the underlying temporal structure of sequential HRI tasks. The proposed Temporal-Reasoning-based LFD (TR-LFD) framework relies on an automated spatial reasoning layer to identify and generalize relevant spatial features, and a temporal reasoning layer to analyze and learn the high-level temporal structure of an HRI task. We evaluate the performance of this framework by learning a well-explored task in HRI research: robot-mediated autism intervention. The source code for this implementation is available at https://github.com/AssistiveRoboticsUNH/TR-LFD.
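
To make the two-layer idea concrete, below is a minimal, hypothetical Python sketch of a TR-LFD-style pipeline. All names (`SpatialReasoningLayer`, `TemporalReasoningLayer`, the toy event labels and demonstrations) are illustrative assumptions, not the paper's API, and the temporal layer is simplified to a first-order transition count over discrete events; the authors' actual implementation is in the linked repository.

```python
from collections import defaultdict
from typing import Callable, Hashable, List, Optional


class SpatialReasoningLayer:
    """Maps each raw observation in a demonstration to a discrete event label.

    Stands in for the paper's automated spatial reasoning layer; `classify`
    is any user-supplied observation -> event-label function (assumption).
    """

    def __init__(self, classify: Callable[[object], Hashable]):
        self.classify = classify

    def extract_events(self, demonstration: List[object]) -> List[Hashable]:
        return [self.classify(obs) for obs in demonstration]


class TemporalReasoningLayer:
    """Learns a first-order transition model over event labels (a simplification)."""

    def __init__(self):
        # transitions[a][b] = number of times event b directly followed event a
        self.transitions = defaultdict(lambda: defaultdict(int))

    def fit(self, event_sequences: List[List[Hashable]]) -> None:
        for seq in event_sequences:
            for a, b in zip(seq, seq[1:]):
                self.transitions[a][b] += 1

    def most_likely_next(self, current_event: Hashable) -> Optional[Hashable]:
        followers = self.transitions.get(current_event)
        if not followers:
            return None
        return max(followers, key=followers.get)


# Usage: learn the event-transition structure from labeled demonstrations,
# then query the expected next step during execution. Event names are toy
# stand-ins loosely inspired by a prompt/response/reward intervention loop.
spatial = SpatialReasoningLayer(classify=lambda obs: obs["label"])
demos = [
    [{"label": "prompt"}, {"label": "child_responds"}, {"label": "reward"}],
    [{"label": "prompt"}, {"label": "child_responds"}, {"label": "reward"}],
    [{"label": "prompt"}, {"label": "no_response"}, {"label": "prompt"}],
]
temporal = TemporalReasoningLayer()
temporal.fit([spatial.extract_events(d) for d in demos])
print(temporal.most_likely_next("prompt"))  # -> "child_responds"
```

The design choice mirrored here is the separation of concerns the abstract describes: the spatial layer abstracts raw sensory input into discrete events, so the temporal layer can reason over event sequences independently of how those events are perceived.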