{"title":"探讨各种声学特征在粤语声乐情感中的贡献。","authors":"Dong Han, Yike Yang","doi":"10.1044/2025_JSLHR-24-00677","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>The aim of this study was to investigate the acoustic patterns of six emotions and a neutral state in Cantonese speech by focusing on the prosodic modulations that convey emotional content in this tonal language, which has six lexical tones.</p><p><strong>Method: </strong>We employed the extended Geneva minimalistic acoustic parameter set to systematically analyze the acoustic features of 3,474 recordings from the Cantonese Audio-Visual Emotional Speech Database. Linear mixed-effects models were fitted to examine variations in acoustic parameters across emotional states. Decision tree models were used to assess the relative contributions of 22 acoustic parameters in classifying emotions.</p><p><strong>Results: </strong>By fitting linear mixed-effects models, our results revealed statistically significant variations in most of the acoustic parameters across diverse emotional states. The decision tree models showed the relative contributions of 22 acoustic parameters in the classification of emotions, with spectral parameters accounting for 65.45% of the significance in distinguishing all seven emotional states, significantly exceeding other groups of features.</p><p><strong>Conclusions: </strong>Our findings highlight the unique characteristics of emotional expression in Cantonese, in which spectral parameters play a more significant role compared to the frequency-related parameters that are often emphasized in nontonal languages. Our results contribute significantly to understanding vocal emotion expression in tonal languages and are particularly useful for designing emotion-recognition systems and hearing aids that are tailored to tonal language environments. 
Furthermore, these insights have potential implications for enhancing emotional communication and cognitive training interventions for Cantonese-speaking individuals who use hearing aids or have cochlear implants, are on the autism spectrum, or have Alzheimer's disease.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-12"},"PeriodicalIF":2.2000,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring the Contributions of Various Acoustic Features in Cantonese Vocal Emotions.\",\"authors\":\"Dong Han, Yike Yang\",\"doi\":\"10.1044/2025_JSLHR-24-00677\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>The aim of this study was to investigate the acoustic patterns of six emotions and a neutral state in Cantonese speech by focusing on the prosodic modulations that convey emotional content in this tonal language, which has six lexical tones.</p><p><strong>Method: </strong>We employed the extended Geneva minimalistic acoustic parameter set to systematically analyze the acoustic features of 3,474 recordings from the Cantonese Audio-Visual Emotional Speech Database. Linear mixed-effects models were fitted to examine variations in acoustic parameters across emotional states. Decision tree models were used to assess the relative contributions of 22 acoustic parameters in classifying emotions.</p><p><strong>Results: </strong>By fitting linear mixed-effects models, our results revealed statistically significant variations in most of the acoustic parameters across diverse emotional states. 
The decision tree models showed the relative contributions of 22 acoustic parameters in the classification of emotions, with spectral parameters accounting for 65.45% of the significance in distinguishing all seven emotional states, significantly exceeding other groups of features.</p><p><strong>Conclusions: </strong>Our findings highlight the unique characteristics of emotional expression in Cantonese, in which spectral parameters play a more significant role compared to the frequency-related parameters that are often emphasized in nontonal languages. Our results contribute significantly to understanding vocal emotion expression in tonal languages and are particularly useful for designing emotion-recognition systems and hearing aids that are tailored to tonal language environments. Furthermore, these insights have potential implications for enhancing emotional communication and cognitive training interventions for Cantonese-speaking individuals who use hearing aids or have cochlear implants, are on the autism spectrum, or have Alzheimer's disease.</p>\",\"PeriodicalId\":520690,\"journal\":{\"name\":\"Journal of speech, language, and hearing research : JSLHR\",\"volume\":\" \",\"pages\":\"1-12\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of speech, language, and hearing research : JSLHR\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1044/2025_JSLHR-24-00677\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of speech, language, and hearing research : 
JSLHR","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1044/2025_JSLHR-24-00677","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Exploring the Contributions of Various Acoustic Features in Cantonese Vocal Emotions.
Purpose: The aim of this study was to investigate the acoustic patterns of six emotions and a neutral state in Cantonese speech by focusing on the prosodic modulations that convey emotional content in this tonal language, which has six lexical tones.
Method: We employed the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS) to systematically analyze the acoustic features of 3,474 recordings from the Cantonese Audio-Visual Emotional Speech Database. Linear mixed-effects models were fitted to examine variation in acoustic parameters across emotional states, and decision tree models were used to assess the relative contributions of 22 acoustic parameters to emotion classification.
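The per-parameter analysis described above can be sketched as follows. This is a minimal illustration, not the authors' code: it fits a linear mixed-effects model (via statsmodels) with emotion as a fixed effect and speaker as a random intercept, using synthetic data and made-up column names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for one acoustic parameter ("loudness" is a
# placeholder name, not a column from the actual database).
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "loudness": rng.normal(60, 5, n),
    "emotion": rng.choice(["neutral", "anger", "sadness"], n),
    "speaker": rng.choice([f"s{i}" for i in range(10)], n),
})

# Emotion enters as a categorical fixed effect; each speaker gets a
# random intercept, absorbing between-speaker baseline differences.
model = smf.mixedlm("loudness ~ C(emotion)", df, groups=df["speaker"])
result = model.fit()
print(result.params)
```

In the study itself, one such model would be fitted per acoustic parameter, and the fixed-effect contrasts for emotion would indicate whether that parameter varies reliably across emotional states.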
Results: The linear mixed-effects models revealed statistically significant variation in most of the acoustic parameters across emotional states. The decision tree models quantified the relative contributions of the 22 acoustic parameters to emotion classification: spectral parameters accounted for 65.45% of the total importance in distinguishing the seven emotional states, far exceeding the other feature groups.
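Group-level contribution figures of this kind can be computed by training a decision tree and summing scikit-learn's impurity-based feature importances within each parameter family. The sketch below uses synthetic data, placeholder feature names, and an assumed grouping of the 22 parameters; only the "spectral" columns are given any signal, so they should dominate the importances.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# Hypothetical split of 22 parameters into families (not the paper's).
groups = {"frequency": 5, "energy": 5, "spectral": 12}
names = [f"{g}_{i}" for g, k in groups.items() for i in range(k)]

X = rng.normal(size=(500, len(names)))
y = rng.integers(0, 7, size=500)   # seven emotional states
X[:, 10:] += y[:, None] * 0.5      # inject signal into "spectral" columns only

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
importance = clf.feature_importances_  # impurity-based, sums to 1

# Sum per-feature importances within each family to get group shares.
start = 0
for g, k in groups.items():
    share = importance[start:start + k].sum()
    print(f"{g}: {100 * share:.1f}%")
    start += k
```

Summing within families is how a per-feature importance vector collapses into statements like "spectral parameters account for 65.45% of the importance."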
Conclusions: Our findings highlight the distinctive character of emotional expression in Cantonese, in which spectral parameters play a greater role than the frequency-related parameters often emphasized in nontonal languages. These results advance our understanding of vocal emotion expression in tonal languages and are particularly useful for designing emotion-recognition systems and hearing aids tailored to tonal language environments. Furthermore, these insights have potential implications for enhancing emotional communication and cognitive training interventions for Cantonese-speaking individuals who use hearing aids or cochlear implants, are on the autism spectrum, or have Alzheimer's disease.