Authors: V.A.S.A. Ranasinghe, Dilruk Ranasinghe
DOI: 10.1109/SLAAI-ICAI56923.2022.10002669
Venue: 2022 6th SLAAI International Conference on Artificial Intelligence (SLAAI-ICAI)
Publication date: 2022-12-01
Non-Verbal Communication Based Emotion Detection in Online Interviews
Online interviews have become the norm, both because of the need to maintain social distance and as an efficient way to avoid other prevailing constraints. In an online interview the candidate faces the panel virtually, which limits the panel's opportunity to observe the candidate's facial expressions, body language, and other soft skills. Analyzing the soft skills and attitudes required for a particular job is a useful long-term indicator of a candidate's suitability to remain in the post. Yet selecting the most suitable candidate, one who is good 'on paper' as well as 'in person', is very challenging. This research proposes a method for identifying candidates' emotions from video sequences captured during the interview; the resulting model computes the most prevalent emotion shown by the candidate over the course of the interview. It is expected that the fine-grained, speaker-specific continuous emotion recognition system developed in this research will help online interview panels select the most suitable candidate by providing additional information about candidates. The system consists of four main modules: image preprocessing, feature extraction, identification of feature occurrences and intensities, and classification of emotions. Using the Mini_Xception method, the system classifies emotions with 68% accuracy. The model could be further improved with a more uniform training data set, which remains a challenge. As future work, the authors identify developing a prediction model for each candidate's suitability for the advertised post.
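The final aggregation step described in the abstract, computing the most prevalent emotion across an interview video, can be sketched as a simple majority vote over per-frame predictions. This is an illustrative sketch, not the paper's implementation: the emotion label set and the `most_prevalent_emotion` helper are assumptions, standing in for whatever labels the frame-level Mini_Xception classifier emits.

```python
from collections import Counter

# Hypothetical emotion label set; the paper's actual classes may differ.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def most_prevalent_emotion(frame_labels):
    """Return the emotion occurring most often across classified video frames.

    frame_labels: one predicted emotion label per frame, in any order.
    """
    if not frame_labels:
        raise ValueError("no frames classified")
    counts = Counter(frame_labels)
    # most_common(1) gives [(label, count)] for the top label.
    return counts.most_common(1)[0][0]

# Example: per-frame labels as a frame-level classifier might produce them.
frames = ["neutral", "happy", "neutral", "happy", "happy", "surprise"]
print(most_prevalent_emotion(frames))  # happy
```

A continuous, speaker-specific system as described would feed this aggregator from the preceding modules (preprocessing, feature extraction, feature occurrence/intensity identification), keeping one label stream per speaker.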