Non-Verbal Communication Based Emotion Detection in Online Interviews

V.A.S.A. Ranasinghe, Dilruk Ranasinghe
{"title":"基于非语言交流的在线访谈情绪检测","authors":"V.A.S.A. Ranasinghe, Dilruk Ranasinghe","doi":"10.1109/SLAAI-ICAI56923.2022.10002669","DOIUrl":null,"url":null,"abstract":"Online interviews has become the norm at present due to the requirement of maintaining social distance as well as an efficient means for avoiding other prevailing limitations. In online interviews the candidate faces the interview panel virtually, limiting the opportunity of the panel to observe facial expressions, body language and other soft skills of the candidate. Analyzing the required soft skills and attitudes for a particular job is a good parameter in the long term to judge the suitability of a candidate to hold on for the job. Yet, selecting the most suitable candidate who is good ‘on paper’ as well as good ‘in person’ is very challenging. This research proposes a method of identifying emotions of candidates based on a captured video sequence of the candidate during the interview. Finally the developed model will be able to calculate the most prevalent emotion of the candidate during the interview. Thus, it is expected that fine-grained speaker-specific continuous emotion recognition system developed in this research will help online interview panels to select the most suitable candidate by giving extra information about the candidates in online interviews. The system consists of four main modules for image preprocessing, feature extraction, identification of feature occurrences and intensities and classification of emotions. The system is capable of classifying the emotions with 68% accuracy using the Mini_Xception method. The model can be further improved by having a uniform training data set, which is a challenge. As future work it was identified to develop a prediction model for the suitability of each candidate for the advertised post.","PeriodicalId":308901,"journal":{"name":"2022 6th SLAAI International Conference on Artificial Intelligence (SLAAI-ICAI)","volume":"148 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Non-Verbal Communication Based Emotion Detection in Online Interviews\",\"authors\":\"V.A.S.A. Ranasinghe, Dilruk Ranasinghe\",\"doi\":\"10.1109/SLAAI-ICAI56923.2022.10002669\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Online interviews has become the norm at present due to the requirement of maintaining social distance as well as an efficient means for avoiding other prevailing limitations. In online interviews the candidate faces the interview panel virtually, limiting the opportunity of the panel to observe facial expressions, body language and other soft skills of the candidate. Analyzing the required soft skills and attitudes for a particular job is a good parameter in the long term to judge the suitability of a candidate to hold on for the job. Yet, selecting the most suitable candidate who is good ‘on paper’ as well as good ‘in person’ is very challenging. This research proposes a method of identifying emotions of candidates based on a captured video sequence of the candidate during the interview. Finally the developed model will be able to calculate the most prevalent emotion of the candidate during the interview. 
Thus, it is expected that fine-grained speaker-specific continuous emotion recognition system developed in this research will help online interview panels to select the most suitable candidate by giving extra information about the candidates in online interviews. The system consists of four main modules for image preprocessing, feature extraction, identification of feature occurrences and intensities and classification of emotions. The system is capable of classifying the emotions with 68% accuracy using the Mini_Xception method. The model can be further improved by having a uniform training data set, which is a challenge. As future work it was identified to develop a prediction model for the suitability of each candidate for the advertised post.\",\"PeriodicalId\":308901,\"journal\":{\"name\":\"2022 6th SLAAI International Conference on Artificial Intelligence (SLAAI-ICAI)\",\"volume\":\"148 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 6th SLAAI International Conference on Artificial Intelligence (SLAAI-ICAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SLAAI-ICAI56923.2022.10002669\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 6th SLAAI International Conference on Artificial Intelligence (SLAAI-ICAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SLAAI-ICAI56923.2022.10002669","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Online interviews have become the norm, both because of the need to maintain social distance and because they are an efficient way to avoid other practical constraints. In an online interview the candidate faces the panel virtually, which limits the panel's opportunity to observe the candidate's facial expressions, body language, and other soft skills. Assessing the soft skills and attitudes required for a particular job is a useful long-term indicator of whether a candidate is suited to the position, yet selecting the candidate who is good 'on paper' as well as good 'in person' remains very challenging. This research proposes a method for identifying a candidate's emotions from a video sequence captured during the interview; the developed model computes the most prevalent emotion shown by the candidate over the course of the interview. The fine-grained, speaker-specific, continuous emotion recognition system developed in this research is therefore expected to help online interview panels select the most suitable candidate by providing additional information about candidates. The system consists of four main modules: image preprocessing, feature extraction, identification of feature occurrences and intensities, and classification of emotions. It classifies emotions with 68% accuracy using the Mini_Xception method. The model could be further improved with a more uniform training data set, which remains a challenge. Developing a model that predicts each candidate's suitability for the advertised post is identified as future work.
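The following Python sketch illustrates one way the four-module pipeline summarized in the abstract could be wired together: per-frame preprocessing, face cropping, Mini_Xception-style emotion scoring, and a final aggregation to the most prevalent emotion. It is a minimal sketch under assumptions, not the authors' implementation: the model file path, the seven FER-2013-style emotion labels, the OpenCV Haar-cascade face detector, and the frame-sampling step are all hypothetical choices for illustration and are not specified in the abstract.

```python
# Minimal sketch of the pipeline described in the abstract:
# (1) image preprocessing, (2) face/feature extraction, (3) per-frame emotion
# scoring with a Mini_Xception-style classifier, (4) aggregation of per-frame
# labels to the most prevalent emotion over the interview video.
# Model path, label set, detector, and frame step are assumptions.

from collections import Counter

import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Assumed: a Mini_Xception model trained on FER-2013-style data with these
# seven classes, expecting 64x64 grayscale input (hypothetical file path).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
MODEL_PATH = "mini_xception_emotion.h5"

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
emotion_model = load_model(MODEL_PATH, compile=False)


def preprocess_face(gray_frame, box, input_size=(64, 64)):
    """Crop the detected face, resize, and normalise for the classifier."""
    x, y, w, h = box
    face = cv2.resize(gray_frame[y:y + h, x:x + w], input_size)
    face = face.astype("float32") / 255.0          # scale to [0, 1]
    return np.expand_dims(face, axis=(0, -1))      # shape (1, 64, 64, 1)


def most_prevalent_emotion(video_path, frame_step=10):
    """Classify every `frame_step`-th frame and return the modal emotion."""
    capture = cv2.VideoCapture(video_path)
    per_frame_labels = []
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % frame_step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_detector.detectMultiScale(gray, 1.3, 5)
            if len(faces) > 0:
                # Assume the largest detected face belongs to the candidate.
                box = max(faces, key=lambda b: b[2] * b[3])
                scores = emotion_model.predict(preprocess_face(gray, box),
                                               verbose=0)[0]
                per_frame_labels.append(EMOTIONS[int(np.argmax(scores))])
        frame_index += 1
    capture.release()
    if not per_frame_labels:
        return None
    return Counter(per_frame_labels).most_common(1)[0][0]


if __name__ == "__main__":
    print(most_prevalent_emotion("candidate_interview.mp4"))
```

In this reading, the "most prevalent emotion" of the abstract is simply the majority vote over per-frame emotion labels, while the 68% figure reported in the paper refers to the accuracy of the per-frame emotion classification with Mini_Xception.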