{"title":"基于深度学习技术的数据驱动动作捕捉的虚拟角色动画","authors":"G. Rajendran, Ojus Thomas Lee","doi":"10.1109/ICIIS51140.2020.9342693","DOIUrl":null,"url":null,"abstract":"Perceptions in motion capture (mocap) technology are increasing every day as the variety of applications using it is doubling. By leveraging the resources offered by mocap technology, human activity characteristics are captured and can be used as the source for animation. The devices involved in the technology are therefore very costly and hence not practical for personal use. In this scenario, we implement a framework capable of producing mocap data from standard RGB video and use it to animate a character in 3D space, based on the action of person in the original video with the help of deep learning techniques. The Human Mesh Recovery (HMR) scheme is used to extract mocap data from the input video to determine where joints of the person in the input video are located in 3D space, using 2D pose estimation. The locations of 3D joints are used as mocap data and transferred to Blender with a simple 3D character using which the character can be animated. A subjective evaluation of our framework based on the metric called observation factor was performed and yielded an accuracy value of 73.5%.","PeriodicalId":352858,"journal":{"name":"2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Virtual Character Animation based on Data-driven Motion Capture using Deep Learning Technique\",\"authors\":\"G. Rajendran, Ojus Thomas Lee\",\"doi\":\"10.1109/ICIIS51140.2020.9342693\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Perceptions in motion capture (mocap) technology are increasing every day as the variety of applications using it is doubling. By leveraging the resources offered by mocap technology, human activity characteristics are captured and can be used as the source for animation. The devices involved in the technology are therefore very costly and hence not practical for personal use. In this scenario, we implement a framework capable of producing mocap data from standard RGB video and use it to animate a character in 3D space, based on the action of person in the original video with the help of deep learning techniques. The Human Mesh Recovery (HMR) scheme is used to extract mocap data from the input video to determine where joints of the person in the input video are located in 3D space, using 2D pose estimation. The locations of 3D joints are used as mocap data and transferred to Blender with a simple 3D character using which the character can be animated. 
A subjective evaluation of our framework based on the metric called observation factor was performed and yielded an accuracy value of 73.5%.\",\"PeriodicalId\":352858,\"journal\":{\"name\":\"2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS)\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICIIS51140.2020.9342693\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIIS51140.2020.9342693","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Virtual Character Animation based on Data-driven Motion Capture using Deep Learning Technique
Interest in motion capture (mocap) technology is growing steadily as the range of applications that use it expands. By leveraging the resources offered by mocap technology, the characteristics of human activity can be captured and used as source material for animation. However, the devices the technology requires are very costly and hence impractical for personal use. In this work, we implement a framework that produces mocap data from standard RGB video and uses it, with the help of deep learning techniques, to animate a character in 3D space based on the actions of the person in the original video. The Human Mesh Recovery (HMR) scheme is used to extract mocap data from the input video: starting from 2D pose estimation, it determines where the joints of the person are located in 3D space. These 3D joint locations serve as the mocap data and are transferred to Blender, where they drive a simple 3D character. A subjective evaluation of our framework based on a metric called the observation factor yielded an accuracy of 73.5%.
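To make the transfer step concrete, below is a minimal sketch (not the paper's actual code) of how per-frame 3D joint locations produced by an HMR-style stage could be brought into Blender through its Python API (bpy). The JSON file name and joint naming are assumptions, since the paper does not specify its data format; the sketch keyframes each joint onto an empty object, which a rigged character can then follow through bone constraints.

```python
# A minimal sketch of the Blender transfer step, not the paper's actual code.
# Assumes the HMR stage has already written per-frame 3D joint locations to
# "mocap_joints.json" (a hypothetical format: a list of {joint_name: [x, y, z]}
# dicts, one per frame). Run inside Blender's embedded Python interpreter.
import json

import bpy

with open("mocap_joints.json") as f:
    frames = json.load(f)  # one {joint_name: [x, y, z]} dict per video frame

# Create one empty object per joint; a rigged character can follow the
# empties through bone constraints (e.g. Copy Location or IK targets).
empties = {}
for name in frames[0]:
    empty = bpy.data.objects.new(name, None)          # None -> an empty object
    bpy.context.scene.collection.objects.link(empty)  # add it to the scene
    empties[name] = empty

# Keyframe each joint's location on every frame to reproduce the motion.
for frame_idx, joints in enumerate(frames, start=1):
    for name, location in joints.items():
        empties[name].location = location
        empties[name].keyframe_insert(data_path="location", frame=frame_idx)

bpy.context.scene.frame_end = len(frames)
```

Keyframing empties rather than posing armature bones directly sidesteps the rest-pose coordinate conversion that a full retargeting step would require; mapping the joint positions into bone-local space is left to the rig's constraints.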