Authors: Chun Hei Lee, Nicole Chee Lin Yueh, K. Woo
Published in: 2022 Sixth IEEE International Conference on Robotic Computing (IRC), December 2022
DOI: 10.1109/IRC55401.2022.00068
Human-inspired Video Imitation Learning on Humanoid Model
Generating natural, human-like locomotion and other legged motions for bipedal robots has long been challenging. One emerging solution is imitation learning. Because the available demonstrations are mostly state-only, state-of-the-art Generative Adversarial Imitation Learning (GAIL) with Imitation from Observation (IfO) capability is an ideal framework for this problem. However, these frameworks often struggle with new or complicated movements: their usual data sources are either expensive to set up or, due to accuracy problems, fail to produce satisfactory results without computationally expensive preprocessing. Inspired by how people acquire advanced skills only after mastering the basics of a subject, this paper proposes a Motion capture-aided Video Imitation (MoVI) learning framework based on Adversarial Motion Priors (AMP): it combines motion capture data of primary actions, such as walking, with video clips of a target motion, such as running, aiming to produce smooth and natural imitations of the target motion. The framework can generate varied human-like locomotion from the most common and abundant motion capture data paired with arbitrary video clips of motion, without the need for expensive datasets or sophisticated preprocessing.
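The abstract builds on Adversarial Motion Priors, where a discriminator scores state transitions as reference-like or policy-generated and the policy receives a "style" reward derived from that score. As an illustrative sketch only (the paper's own implementation is not given here), the least-squares reward shaping published with AMP can be written as follows; the toy linear discriminator and all variable names are hypothetical stand-ins for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_discriminator(s, s_next, w):
    """Hypothetical linear stand-in for a trained AMP discriminator.

    A real discriminator is a neural network trained to output values
    near 1 for reference (mocap/video) transitions and near -1 for
    policy-generated ones; here a random linear map suffices to
    illustrate the reward computation.
    """
    x = np.concatenate([s, s_next])
    return float(np.tanh(w @ x))  # squash score into (-1, 1)

def amp_style_reward(d_score):
    """Least-squares AMP style reward: max(0, 1 - 0.25 * (d - 1)^2).

    Maps a discriminator score in (-1, 1) to a reward in [0, 1]:
    1 when the transition looks exactly like reference motion,
    0 when it looks entirely policy-generated.
    """
    return max(0.0, 1.0 - 0.25 * (d_score - 1.0) ** 2)

# Example: score one random 8-dimensional state transition.
s, s_next = rng.standard_normal(8), rng.standard_normal(8)
w = rng.standard_normal(16)
r = amp_style_reward(toy_discriminator(s, s_next, w))
assert 0.0 <= r <= 1.0  # style reward is always bounded in [0, 1]
```

In a full pipeline this style reward would be blended with a task reward (e.g. forward velocity for running), which is how frameworks in this family steer the policy toward the target motion while the discriminator keeps the motion human-like.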