Toward emotional recognition during HCI using marker-based automated video tracking

Authors: Ulrik Söderström, Songyu Li, Harry L. Claxton, Daisy C. Holmes, Thomas T. Ranji, Carlos P. Santos, Carina E. I. Westling, H. Witchel

Published in: Proceedings of the 31st European Conference on Cognitive Ergonomics, 2019-09-10

DOI: 10.1145/3335082.3335103 (https://doi.org/10.1145/3335082.3335103)
Citations: 0
Abstract
Postural movement of a seated person, as determined by lateral-aspect video analysis, can be used to estimate learning-relevant emotions. In this article, the motion of a person interacting with a computer is automatically extracted from video by detecting the positions of motion-tracking markers on the person’s body. Candidate marker areas are detected with a convolutional neural network, and the correct candidate areas are then identified by template matching. Several markers are detected in more than 99 % of the video frames, while one is detected in only ≈ 80.2 % of the frames. The template matching likewise identifies the correct template in ≈ 80 % of the frames. This means that, almost always, when the correct candidates are extracted, the template matching is successful. Suggestions for improving performance are given, along with possible uses of the marker positions for estimating sagittal-plane motion.
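The abstract describes a two-stage pipeline: a CNN proposes candidate marker regions, and template matching selects the correct one. The selection step can be sketched with normalized cross-correlation (a minimal NumPy sketch with hypothetical function names; the paper does not specify its matching criterion or implementation):

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between a candidate patch and a template.

    Returns a score in [-1, 1]; 1.0 means a perfect (up to brightness/contrast)
    match. Both inputs must have the same shape.
    """
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def select_marker(candidates, template):
    """Pick the CNN-proposed candidate patch that best matches the marker template."""
    scores = [ncc(c, template) for c in candidates]
    best = int(np.argmax(scores))
    return best, scores[best]

# Toy usage: the true marker patch hidden among random distractor patches.
rng = np.random.default_rng(0)
template = rng.random((8, 8))
candidates = [rng.random((8, 8)), template.copy(), rng.random((8, 8))]
idx, score = select_marker(candidates, template)
# idx == 1 (the copied template), score == 1.0
```

In practice a score threshold would also be needed, so that frames where no candidate resembles the marker (the ≈ 20 % failure cases mentioned in the abstract) can be rejected rather than assigned to the best-scoring distractor.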