Head movement during natural group conversation and inter-annotator agreement on manual annotation
Angkana Lertpoompunya, Nathan C. Higgins, Erol J. Ozmeral, D. Eddins
The Journal of the Acoustical Society of America, October 2023. DOI: 10.1121/10.0022958
Abstract
During speech communication and conversational turn-taking, listeners direct their head and eyes to receive meaningful auditory and visual cues. Features of these behaviors may convey listener intent. This study designed a test environment, data collection protocol, and procedures, and investigated head movement behaviors during self-driven conversations among multiple partners. Nine participants were tested in cohorts of three. Participants wore a headset with sensors tracked by an infrared camera system. Participants watched an audio-video clip, followed by a 5-min undirected discussion. The entire session was video recorded for annotation purposes. Two annotators independently coded the video files using the EUDICO Linguistic Annotator (ELAN) software application. Annotations were then co-registered with the head-tracking data in post-processing. Inter-annotator agreement demonstrated the desired reliability, thereby validating the designed procedures. Movement trajectories showed individual differences in the head yaw distribution. The combination of objective measures of head movement and manual annotation of conversation behaviors provides a rich data set for characterizing natural conversations in ecologically valid settings. The measurement procedures and coding system developed here are a first step toward characterizing head movements during conversations, as needed to predict listening intent and to create actions based on those predictions.
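The abstract does not include the authors' analysis code. As a rough illustration of the workflow it describes, the sketch below resamples two annotators' interval codings (e.g., exported from ELAN as tab-delimited begin/end/label rows) onto a common time grid, so they can be aligned with time-stamped head-tracking data, and computes Cohen's kappa as one possible inter-annotator agreement measure. The file names, column layout, label set, frame step, and choice of kappa are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): align two annotators'
# interval annotations on a common time grid and compute Cohen's kappa.
# Assumed input: tab-delimited files with columns begin_s, end_s, label.

import csv
from collections import Counter

FRAME_S = 0.02  # assumed resampling step in seconds


def load_intervals(path):
    """Read (begin_s, end_s, label) rows from a tab-delimited export."""
    intervals = []
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            intervals.append((float(row[0]), float(row[1]), row[2]))
    return intervals


def to_frames(intervals, duration_s, fill="none"):
    """Sample interval labels onto a uniform time grid of length duration_s."""
    n = int(duration_s / FRAME_S)
    frames = [fill] * n
    for begin, end, label in intervals:
        for i in range(int(begin / FRAME_S), min(int(end / FRAME_S), n)):
            frames[i] = label
    return frames


def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1.0 - expected)


if __name__ == "__main__":
    duration = 300.0  # 5-minute discussion
    ann1 = to_frames(load_intervals("annotator1.txt"), duration)
    ann2 = to_frames(load_intervals("annotator2.txt"), duration)
    print(f"Cohen's kappa: {cohens_kappa(ann1, ann2):.3f}")
```

Once both annotation tracks share the same frame grid, each frame can also be paired with the head-yaw sample nearest in time, which is one straightforward way to co-register the manual codes with the tracking data.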