L. Chen, Su-Youn Yoon, C. W. Leong, Michelle P. Martín‐Raugh, Min Ma
An Initial Analysis of Structured Video Interviews by Using Multimodal Emotion Detection
ERM4HCI '14, published 2014-11-16. DOI: 10.1145/2668056.2668057 (https://doi.org/10.1145/2668056.2668057)
Citations: 16
Abstract
Online video interviews have recently seen increasing use in the employment process. Although several automatic techniques have emerged for analyzing interview videos, only simple emotion analyses have been attempted so far, e.g., counting the number of smiles on an interviewee's face. In this paper, we report an initial study that employs advanced multimodal emotion detection approaches to measure performance on an interview task that elicits emotion. On an acted interview corpus we created, we performed evaluations using a Speech-based Emotion Recognition (SER) system as well as an off-the-shelf facial expression analysis toolkit (FACET). While the results suggest the promise of using FACET for emotion detection, the benefits of employing SER are somewhat limited.