Acoustic-prosodic measures discriminate the emotions of Brazilian Portuguese speakers
Alexandra Christine de Aguiar, Ana Carolina Constantini, Ronei Marcos de Moraes, Anna Alice Almeida
CoDAS, vol. 37, no. 4, e20240116. Published 2025-08-04. DOI: 10.1590/2317-1782/e20240116pt
Citations: 0
Abstract
Purpose: To verify whether acoustic-prosodic measures differ across emotional states in speakers of Brazilian Portuguese (BP).
Methods: The data sample consisted of 182 audio signals produced by actors (professionals or students) performing the semi-spontaneous speech task "Look at the blue plane" in the various emotions (joy, sadness, fear, anger, surprise, disgust) and in a neutral condition. Acoustic-prosodic measures of duration, fundamental frequency, and intensity were extracted for each emotion. The Friedman comparison test was used to verify whether these measures discriminate emotions.
Results: The acoustic-prosodic analysis revealed significant variation between emotions. Disgust stood out with the highest utterance rate and higher duration values. In contrast, joy exhibited faster speech, with lower duration values and greater intensity. Sadness and fear were marked by lower intensity and lower fundamental frequencies, and fear presented the lowest positive asymmetry values for the z-score and z-smoothed measures, with less lengthening of the segments. Anger was distinguished by the highest vocal intensity, while surprise recorded the highest fundamental frequency values.
Conclusion: The acoustic-prosodic measures proved to be effective tools for differentiating emotions in BP speakers. These parameters show great potential for discerning different emotional states, broadening knowledge of vocal expressiveness, and opening possibilities for emotion-recognition technologies with applications in artificial intelligence and mental health.