S. Nambiar, Rahul Das, Sowmya Rasipuram, D. Jayagopi
{"title":"自动生成可操作的反馈,以提高工作面试中的社会能力","authors":"S. Nambiar, Rahul Das, Sowmya Rasipuram, D. Jayagopi","doi":"10.1145/3139513.3139515","DOIUrl":null,"url":null,"abstract":"Soft skill assessment is a vital aspect of a job interview process as these qualities are indicative of the candidates compatibility in the work environment, their negotiation skills, client interaction prowess and leadership flair among other factors. The rise in popularity of asynchronous video based job interviews has created the need for a scalable solution to gauge candidate performance and hence we turn to automation. In this research, we aim to build a system that automatically provides a summative feedback to candidates at the end of an interview. Most feedback system predicts values of social indicators and communication cues, leaving the interpretation open to the user. Our system directly predicts an actionable feedback that leaves the candidate with a tangible take away at the end of the interview. We approached placement trainers and made a list of most common feedback that is given during training and we attempt to predict them directly. Towards this front,we captured data from over 145 participants in an interview like environment. Designing intelligent training environments for job interview preparation using a video data corpus is a demanding task due to its complex correlations and multimodal interactions. We used several state-of-the-art machine learning algorithms with manual annotation as ground truth. The predictive models were built with a focus on nonverbal communication cues so as to reduce the task of addressing the challenges faced in spoken language understanding and task modelling. We extracted audio and lexical features and our findings indicate a stronger correlation to audio and prosodic features in candidate assessment.Our best results gave an accuracy of 95% when the baseline accuracy was 77%.","PeriodicalId":441030,"journal":{"name":"Proceedings of the 1st ACM SIGCHI International Workshop on Multimodal Interaction for Education","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Automatic generation of actionable feedback towards improving social competency in job interviews\",\"authors\":\"S. Nambiar, Rahul Das, Sowmya Rasipuram, D. Jayagopi\",\"doi\":\"10.1145/3139513.3139515\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Soft skill assessment is a vital aspect of a job interview process as these qualities are indicative of the candidates compatibility in the work environment, their negotiation skills, client interaction prowess and leadership flair among other factors. The rise in popularity of asynchronous video based job interviews has created the need for a scalable solution to gauge candidate performance and hence we turn to automation. In this research, we aim to build a system that automatically provides a summative feedback to candidates at the end of an interview. Most feedback system predicts values of social indicators and communication cues, leaving the interpretation open to the user. Our system directly predicts an actionable feedback that leaves the candidate with a tangible take away at the end of the interview. We approached placement trainers and made a list of most common feedback that is given during training and we attempt to predict them directly. 
Towards this front,we captured data from over 145 participants in an interview like environment. Designing intelligent training environments for job interview preparation using a video data corpus is a demanding task due to its complex correlations and multimodal interactions. We used several state-of-the-art machine learning algorithms with manual annotation as ground truth. The predictive models were built with a focus on nonverbal communication cues so as to reduce the task of addressing the challenges faced in spoken language understanding and task modelling. We extracted audio and lexical features and our findings indicate a stronger correlation to audio and prosodic features in candidate assessment.Our best results gave an accuracy of 95% when the baseline accuracy was 77%.\",\"PeriodicalId\":441030,\"journal\":{\"name\":\"Proceedings of the 1st ACM SIGCHI International Workshop on Multimodal Interaction for Education\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-11-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 1st ACM SIGCHI International Workshop on Multimodal Interaction for Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3139513.3139515\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st ACM SIGCHI International Workshop on Multimodal Interaction for Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3139513.3139515","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Automatic generation of actionable feedback towards improving social competency in job interviews
Soft skill assessment is a vital aspect of the job interview process, as these qualities are indicative of a candidate's compatibility with the work environment, their negotiation skills, client interaction prowess and leadership flair, among other factors. The rise in popularity of asynchronous video-based job interviews has created the need for a scalable solution to gauge candidate performance, and hence we turn to automation. In this research, we aim to build a system that automatically provides summative feedback to candidates at the end of an interview. Most feedback systems predict values of social indicators and communication cues, leaving the interpretation open to the user. Our system directly predicts actionable feedback that leaves the candidate with a tangible takeaway at the end of the interview. We approached placement trainers, compiled a list of the most common feedback given during training, and attempt to predict these items directly. Towards this end, we captured data from over 145 participants in an interview-like environment. Designing intelligent training environments for job interview preparation using a video data corpus is a demanding task due to its complex correlations and multimodal interactions. We used several state-of-the-art machine learning algorithms with manual annotation as ground truth. The predictive models were built with a focus on nonverbal communication cues so as to reduce the task of addressing the challenges faced in spoken language understanding and task modelling. We extracted audio and lexical features, and our findings indicate a stronger correlation with audio and prosodic features in candidate assessment. Our best results gave an accuracy of 95% when the baseline accuracy was 77%.
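As a rough illustration of the kind of pipeline the abstract describes (audio/prosodic features fed to standard classifiers to predict an annotator-assigned feedback label), the sketch below uses librosa and scikit-learn. The specific feature set, the SVM baseline, and the file names are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only (assumed pipeline, not the paper's actual system):
# extract simple prosodic/audio features per candidate recording and train a
# classifier to predict one binary actionable-feedback label
# (e.g. 1 = "candidate should modulate volume", 0 = feedback not applicable).
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def prosodic_features(wav_path):
    """Return a small feature vector: pitch statistics, loudness, spectral summary."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[~np.isnan(f0)]                 # keep only voiced frames
    if f0.size == 0:
        f0 = np.array([0.0])
    rms = librosa.feature.rms(y=y)[0]      # frame-level energy (loudness proxy)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([
        [np.mean(f0), np.std(f0)],         # pitch level and variation
        [np.mean(rms), np.std(rms)],       # loudness level and variation
        mfcc.mean(axis=1),                 # spectral summary
    ])

# Placeholder corpus: one recording and one annotated label per candidate.
wav_files = ["cand_001.wav", "cand_002.wav", "cand_003.wav", "cand_004.wav"]
labels = [1, 0, 1, 0]

X = np.vstack([prosodic_features(p) for p in wav_files])
y = np.array(labels)

clf = SVC(kernel="rbf", class_weight="balanced")
# Cross-validated accuracy would then be compared against a majority-class baseline.
print(cross_val_score(clf, X, y, cv=2).mean())
```

In practice, the reported 77% baseline suggests a majority-class comparison, so any learned model is judged by how far it improves on always predicting the most frequent feedback label.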