{"title":"Data Augmentation for Human Motion Prediction","authors":"Takahiro Maeda, N. Ukita","doi":"10.23919/MVA51890.2021.9511368","DOIUrl":null,"url":null,"abstract":"Human motion prediction is seldom deployed to real-world tasks due to difficulty in collecting a huge amount of motion data. We propose two motion data augmentation approaches using Variational AutoEn-coder (VAE) and Inverse Kinematics (IK). Our VAE-based generative model with adversarial training and sampling near samples generates various motions even with insufficient original motion data. Our IK-based augmentation scheme allows us to semi-automatically generate a variety of motions. Furthermore, we correct unrealistic artifacts in the augmented motions. As a result, our method outperforms previous noise-based motion augmentation methods.","PeriodicalId":312481,"journal":{"name":"2021 17th International Conference on Machine Vision and Applications (MVA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 17th International Conference on Machine Vision and Applications (MVA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/MVA51890.2021.9511368","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Human motion prediction is seldom deployed in real-world tasks because collecting large amounts of motion data is difficult. We propose two motion data augmentation approaches using a Variational AutoEncoder (VAE) and Inverse Kinematics (IK). Our VAE-based generative model, trained adversarially and sampled in the neighborhood of real training samples, generates diverse motions even when the original motion data are insufficient. Our IK-based augmentation scheme allows us to semi-automatically generate a variety of motions. Furthermore, we correct unrealistic artifacts in the augmented motions. As a result, our method outperforms previous noise-based motion augmentation methods.
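To make the "sampling near samples" idea concrete, below is a minimal sketch (not the authors' released code) of VAE-based motion augmentation: a real motion clip is encoded, its latent code is perturbed with small Gaussian noise, and the decoder produces nearby synthetic motions. The network sizes, the noise scale `sigma`, and the helper `augment_near_sample` are illustrative assumptions, not values or names from the paper.

```python
import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    """Toy motion VAE; layer widths are assumptions, not the paper's architecture."""
    def __init__(self, motion_dim: int = 66, latent_dim: int = 32):
        super().__init__()
        # Encoder maps a flattened motion clip to latent mean / log-variance.
        self.encoder = nn.Sequential(nn.Linear(motion_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        # Decoder maps a latent code back to a motion clip.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, motion_dim)
        )

    def encode(self, x: torch.Tensor):
        h = self.encoder(x)
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.decoder(z)

@torch.no_grad()
def augment_near_sample(vae: MotionVAE, motion: torch.Tensor,
                        sigma: float = 0.1, n_aug: int = 4) -> torch.Tensor:
    """Generate n_aug motions close to `motion` by perturbing its latent code."""
    mu, _ = vae.encode(motion)              # latent code of the real sample
    z = mu.repeat(n_aug, 1)
    z = z + sigma * torch.randn_like(z)     # sample *near* the sample
    return vae.decode(z)                    # nearby synthetic motions

# Usage: one real clip in, several plausible variants out.
vae = MotionVAE()  # assume weights trained with VAE + adversarial losses beforehand
real_motion = torch.randn(1, 66)            # placeholder for a real motion clip
augmented = augment_near_sample(vae, real_motion)  # shape: (4, 66)
```

In this sketch, staying close to real latent codes (small `sigma`) is what lets the generator produce varied yet plausible motions even when the training set is small; the paper's adversarial training and artifact-correction steps, which further improve realism, are not shown here.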