Title: Affective human pose classification from optical motion capture
Authors: Muhtadin, S. Sumpeno, Aang Pamuji Dyaksa
Published in: 2017 International Seminar on Intelligent Technology and Its Applications (ISITIA), August 2017
DOI: https://doi.org/10.1109/ISITIA.2017.8124095
Citations: 4
Abstract
In animation film production, motion capture (mocap) is a common tool for recording the movements of actors. The reconstructed motion is mapped onto a 3D character to drive its animation. Several parameters significantly affect the quality of the reconstructed human motion, such as the capture of subtle movements and the precision of the reconstruction. Achieving the best result requires proper configuration of camera placement, camera settings, and marker arrangement. Furthermore, the captured data often needs repair after the recording session because some markers were misplaced or could not be identified. The result of this research is a Human Motion Database (HMDB) consisting of poses that express basic emotions, based on the database from The Bodily Expressive Action Stimulus Test (BEAST). The basic emotions are anger, fear, happiness, and sadness. The database was evaluated by classifying and validating the affective pose data. Each pose is represented by the rotation value of every joint in the skeleton, and these values are classified using machine learning to predict the emotion class of each pose. The fear class achieved the highest classification accuracy: the accuracies for fear, anger, happiness, and sadness are 96.87%, 95.62%, 94.37%, and 94.37%, respectively.
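The abstract describes representing each pose as a vector of per-joint rotation values and feeding those vectors to a machine-learning classifier, but does not name the classifier. As a minimal sketch of the idea only, the following uses a 1-nearest-neighbor rule on synthetic rotation vectors; the feature layout, training data, and classifier choice are all illustrative assumptions, not the authors' method or data.

```python
import math

# Hypothetical feature layout: each pose is a flat vector of per-joint
# rotation values (e.g. one Euler angle per skeleton joint, in degrees).
# The labeled training poses below are synthetic examples, not entries
# from the paper's HMDB.
train_poses = [
    ([80.0, 10.0, -5.0, 30.0], "anger"),
    ([20.0, 70.0, 15.0, -40.0], "fear"),
    ([45.0, -20.0, 60.0, 10.0], "happiness"),
    ([5.0, 5.0, -30.0, -60.0], "sadness"),
]

def distance(a, b):
    """Euclidean distance between two joint-rotation vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(pose):
    """Predict a pose's emotion class from its nearest labeled neighbor."""
    return min(train_poses, key=lambda item: distance(pose, item[0]))[1]

# A query pose close to the synthetic "anger" example is labeled "anger".
print(classify([78.0, 12.0, -4.0, 28.0]))
```

In practice a stronger classifier and many labeled poses per emotion would be used; the nearest-neighbor rule here only illustrates mapping rotation-value vectors to emotion classes.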