Identifying storytelling in job interviews using deep learning

Elisabeth Germanier, Mutian He, Amina Mardiyyah Rufai, Philip N. Garner, Adrian Bangerter, Laetitia A. Renier, Marianne Schmid Mast, Koralie Orji

Computers in Human Behavior Reports, Vol. 19, Article 100688. Published 2025-05-25. DOI: 10.1016/j.chbr.2025.100688
Abstract
Structured interviews often include past-behavior questions inviting applicants to recount a past work experience. While optimal responses to these questions should take the form of a story, applicants struggle to produce them extemporaneously. Asynchronous video interviews (AVIs) present new opportunities for job interview coaching, which can incorporate artificial intelligence to analyze audio-recorded responses and deliver personalized feedback. We explore the potential of audio-based deep-learning models to identify storytelling and other, sub-optimal response types (pseudo-stories, decontextualized self-descriptions) in interview audio recordings. Using data from 254 mock interviews featuring three past-behavior questions, we developed models to determine the utterance type, considering different scenarios and labeling schemes of varying granularity. We further applied multiple techniques to improve model accuracy. Findings show that our models achieve satisfactory performance when enhanced with audio information and enriched with longer context (best accuracy: 77.67%). However, providing paralinguistic cues from the audio recordings did not improve the models’ performance. We discuss the results, implications, and future research directions.
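To make the classification task concrete, the sketch below shows one plausible instantiation of an utterance-type classifier over interview transcripts, with prior dialogue prepended as longer context (which the abstract reports helped accuracy). The base model (`bert-base-uncased`), the three-way label set, and the context-concatenation strategy are illustrative assumptions, not the authors' exact architecture; in particular, the paper's audio-enhancement component is not reproduced here.

```python
# Minimal sketch: classify an interview utterance as a story,
# pseudo-story, or decontextualized self-description.
# ASSUMPTIONS: base model, label names, and context handling are
# hypothetical; the classification head must be fine-tuned on
# labeled interview data before its predictions are meaningful.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["story", "pseudo_story", "self_description"]  # assumed scheme

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)
model.eval()

def classify(utterance: str, context: str = "") -> str:
    """Classify one utterance, optionally prepending earlier dialogue
    as longer context before tokenizing."""
    text = (context + " " + utterance).strip()
    inputs = tokenizer(text, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Usage: pass the interviewer's question as context for the response.
print(classify(
    "Last year I led a project that had fallen behind schedule...",
    context="Tell me about a time you handled a setback at work.",
))
```

A speech-based variant could swap the text encoder for an audio one (e.g., a wav2vec2-style sequence classifier over the raw recording), though the abstract notes that adding paralinguistic cues did not improve performance.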