Vanessa Echeverría, Allan Avendaño, K. Chiluiza, Aníbal Vásquez, X. Ochoa
{"title":"Presentation Skills Estimation Based on Video and Kinect Data Analysis","authors":"Vanessa Echeverría, Allan Avendaño, K. Chiluiza, Aníbal Vásquez, X. Ochoa","doi":"10.1145/2666633.2666641","DOIUrl":null,"url":null,"abstract":"This paper identifies, by means of video and Kinect data, a set of predictors that estimate the presentation skills of 448 individual students. Two evaluation criteria were predicted: eye contact and posture and body language. Machine-learning evaluations resulted in models that predicted the performance level (good or poor) of the presenters with 68% and 63% of correctly classified instances, for eye contact and postures and body language criteria, respectively. Furthermore, the results suggest that certain features, such as arms movement and smoothness, provide high significance on predicting the level of development for presentation skills. The paper finishes with conclusions and related ideas for future work.","PeriodicalId":123577,"journal":{"name":"Proceedings of the 2014 ACM workshop on Multimodal Learning Analytics Workshop and Grand Challenge","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"46","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2014 ACM workshop on Multimodal Learning Analytics Workshop and Grand Challenge","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2666633.2666641","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 46
Abstract
This paper identifies, from video and Kinect data, a set of predictors that estimate the presentation skills of 448 individual students. Two evaluation criteria were predicted: eye contact, and posture and body language. Machine-learning evaluations produced models that predicted the presenters' performance level (good or poor) with 68% and 63% correctly classified instances for the eye contact and the posture and body language criteria, respectively. Furthermore, the results suggest that certain features, such as arm movement and its smoothness, are highly significant for predicting the level of development of presentation skills. The paper finishes with conclusions and ideas for future work.
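The abstract describes a binary classification task over Kinect-derived body features. The paper does not publish its feature set or classifier code, so the following is only a minimal illustrative sketch: it assumes hypothetical per-presenter features (arm movement magnitude, movement smoothness, head-yaw variance) and random placeholder labels, and evaluates a generic classifier with cross-validation, analogous in spirit to predicting "good" vs. "poor" for a criterion such as posture and body language.

```python
# Illustrative sketch only; features, labels, and classifier choice are assumptions,
# not the authors' published method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per presenter (448 students in the paper),
# columns = [arm_movement, movement_smoothness, head_yaw_variance]
X = rng.normal(size=(448, 3))

# Hypothetical binary labels: 1 = good, 0 = poor (placeholder, randomly generated)
y = rng.integers(0, 2, size=448)

# Generic binary classifier evaluated with 10-fold cross-validation accuracy,
# i.e. the proportion of correctly classified instances per fold.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)
print(f"Mean accuracy: {scores.mean():.2f}")
```

With real annotated data in place of the random placeholders, the mean cross-validated accuracy would play the role of the 68% / 63% correctly-classified figures reported in the abstract.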