Yuqi Xia, Lei Ren, Xuehao Zhang, Yan Huang, Chaogang Wei, Yuhe Liu
{"title":"助听器对双峰型听者普通话语音情绪识别的影响。","authors":"Yuqi Xia, Lei Ren, Xuehao Zhang, Yan Huang, Chaogang Wei, Yuhe Liu","doi":"10.1044/2025_JSLHR-23-00191","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Cochlear implant (CI) listeners have deficits in emotional perception due to limited spectrotemporal fine structure. Contralateral hearing aids (HAs) carry additional acoustic cues for emotion recognition and improve the quality of life (QoL) in these individuals. This study aimed to investigate the effects of HAs on voice emotion recognition in Mandarin-speaking bimodal adults.</p><p><strong>Method: </strong>Nineteen Mandarin-speaking bimodal adults (<i>M</i><sub>age</sub> = 30.63 ± 8.73 years) and 20 normal-hearing (NH) adults (<i>M</i><sub>age</sub> = 27.15 ± 4.61 years) completed voice emotion (happy, angry, sad, scared, and neutral) recognition and monosyllable recognition tasks. Bimodal listeners completed voice emotion recognition and monosyllable recognition tasks with bimodal listening and CI-alone listening. Health-related QoL in bimodal listeners was evaluated using the Chinese version of the Nijmegen Cochlear Implant Questionnaire (NCIQ).</p><p><strong>Results: </strong>Acoustic analyses showed substantial variations across emotions in voice emotion utterances, mainly in measures of the mean fundamental frequency (<i>F</i>0), <i>F</i>0 range, and duration. NH listeners significantly outperformed bimodal listeners in voice emotion recognition and monosyllable recognition tasks, with significantly higher accuracy scores, Hu values, and shorter reaction times. Participants were mainly affected by <i>F</i>0 cues in the voice emotion recognition task. Bimodal listeners perceived voice emotions more accurately and faster with bimodal devices than with CI alone, suggesting improved accuracy and decreased listening effort with the addition of HAs. Voice emotion recognition accuracy was associated with residual hearing in the nonimplanted ear and monosyllable recognition accuracy in bimodal listeners. The NCIQ scores were not significantly correlated with the accuracy scores for either speech recognition or voice emotion recognition in bimodal listeners after correction for multiple comparisons.</p><p><strong>Conclusions: </strong>Despite experiencing more challenges than NH peers, Mandarin-speaking bimodal listeners showed improved voice emotion perception when using contralateral HAs. Bimodal listeners with better residual hearing in the nonimplanted ear and better speech recognition ability showed better voice emotion perception.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-19"},"PeriodicalIF":2.2000,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Effects of Hearing Aids on Mandarin Voice Emotion Recognition With Bimodal Listeners.\",\"authors\":\"Yuqi Xia, Lei Ren, Xuehao Zhang, Yan Huang, Chaogang Wei, Yuhe Liu\",\"doi\":\"10.1044/2025_JSLHR-23-00191\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>Cochlear implant (CI) listeners have deficits in emotional perception due to limited spectrotemporal fine structure. Contralateral hearing aids (HAs) carry additional acoustic cues for emotion recognition and improve the quality of life (QoL) in these individuals. 
This study aimed to investigate the effects of HAs on voice emotion recognition in Mandarin-speaking bimodal adults.</p><p><strong>Method: </strong>Nineteen Mandarin-speaking bimodal adults (<i>M</i><sub>age</sub> = 30.63 ± 8.73 years) and 20 normal-hearing (NH) adults (<i>M</i><sub>age</sub> = 27.15 ± 4.61 years) completed voice emotion (happy, angry, sad, scared, and neutral) recognition and monosyllable recognition tasks. Bimodal listeners completed voice emotion recognition and monosyllable recognition tasks with bimodal listening and CI-alone listening. Health-related QoL in bimodal listeners was evaluated using the Chinese version of the Nijmegen Cochlear Implant Questionnaire (NCIQ).</p><p><strong>Results: </strong>Acoustic analyses showed substantial variations across emotions in voice emotion utterances, mainly in measures of the mean fundamental frequency (<i>F</i>0), <i>F</i>0 range, and duration. NH listeners significantly outperformed bimodal listeners in voice emotion recognition and monosyllable recognition tasks, with significantly higher accuracy scores, Hu values, and shorter reaction times. Participants were mainly affected by <i>F</i>0 cues in the voice emotion recognition task. Bimodal listeners perceived voice emotions more accurately and faster with bimodal devices than with CI alone, suggesting improved accuracy and decreased listening effort with the addition of HAs. Voice emotion recognition accuracy was associated with residual hearing in the nonimplanted ear and monosyllable recognition accuracy in bimodal listeners. The NCIQ scores were not significantly correlated with the accuracy scores for either speech recognition or voice emotion recognition in bimodal listeners after correction for multiple comparisons.</p><p><strong>Conclusions: </strong>Despite experiencing more challenges than NH peers, Mandarin-speaking bimodal listeners showed improved voice emotion perception when using contralateral HAs. Bimodal listeners with better residual hearing in the nonimplanted ear and better speech recognition ability showed better voice emotion perception.</p>\",\"PeriodicalId\":520690,\"journal\":{\"name\":\"Journal of speech, language, and hearing research : JSLHR\",\"volume\":\" \",\"pages\":\"1-19\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-07-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of speech, language, and hearing research : JSLHR\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1044/2025_JSLHR-23-00191\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of speech, language, and hearing research : JSLHR","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1044/2025_JSLHR-23-00191","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Effects of Hearing Aids on Mandarin Voice Emotion Recognition With Bimodal Listeners.
Purpose: Cochlear implant (CI) listeners have deficits in emotion perception due to the limited spectrotemporal fine structure conveyed by their devices. Contralateral hearing aids (HAs) provide additional acoustic cues for emotion recognition and can improve quality of life (QoL) in these individuals. This study investigated the effects of HAs on voice emotion recognition in Mandarin-speaking bimodal adults.
Method: Nineteen Mandarin-speaking bimodal adults (mean age = 30.63 ± 8.73 years) and 20 normal-hearing (NH) adults (mean age = 27.15 ± 4.61 years) completed voice emotion (happy, angry, sad, scared, and neutral) recognition and monosyllable recognition tasks. Bimodal listeners completed both tasks in two conditions: bimodal listening (CI plus contralateral HA) and CI-alone listening. Health-related QoL in bimodal listeners was evaluated using the Chinese version of the Nijmegen Cochlear Implant Questionnaire (NCIQ).
Results: Acoustic analyses showed substantial variation across emotions in the voice emotion utterances, mainly in mean fundamental frequency (F0), F0 range, and duration. NH listeners significantly outperformed bimodal listeners in both the voice emotion recognition and monosyllable recognition tasks, with significantly higher accuracy scores and Hu values and significantly shorter reaction times. Participants relied mainly on F0 cues in the voice emotion recognition task. Bimodal listeners perceived voice emotions more accurately and more quickly with bimodal devices than with the CI alone, suggesting improved accuracy and reduced listening effort with the addition of HAs. In bimodal listeners, voice emotion recognition accuracy was associated with residual hearing in the nonimplanted ear and with monosyllable recognition accuracy. After correction for multiple comparisons, NCIQ scores were not significantly correlated with accuracy scores for either speech recognition or voice emotion recognition in bimodal listeners.
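The Hu values reported above most likely refer to the unbiased hit rate, a bias-corrected per-category accuracy commonly attributed to Wagner (1993), although the abstract does not define the measure. The sketch below is a minimal illustration, not the authors' analysis pipeline: it computes per-emotion Hu values from a hypothetical confusion matrix and extracts the mean F0, F0 range, and duration measures named above using librosa's pYIN pitch tracker. The emotion labels, confusion counts, and pitch-tracker settings are all illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): per-emotion unbiased hit
# rates (Hu) from a confusion matrix, plus the mean-F0 / F0-range /
# duration measures named in the abstract, via librosa's pYIN tracker.
import numpy as np
import librosa

# Hypothetical emotion categories matching the abstract.
EMOTIONS = ["happy", "angry", "sad", "scared", "neutral"]

def unbiased_hit_rates(conf: np.ndarray) -> np.ndarray:
    """Hu per category: hits^2 / (row total * column total).

    Rows = presented emotion, columns = response. Assumes the
    unbiased-hit-rate definition commonly attributed to Wagner (1993).
    """
    conf = np.asarray(conf, dtype=float)
    hits = np.diag(conf)
    presented = conf.sum(axis=1)   # stimuli presented per emotion
    responded = conf.sum(axis=0)   # responses given per emotion
    valid = (presented > 0) & (responded > 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        hu = np.where(valid, hits**2 / (presented * responded), 0.0)
    return hu

def f0_measures(path: str) -> dict:
    """Mean F0 (Hz), F0 range (Hz), and duration (s) of one utterance."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[voiced & np.isfinite(f0)]  # keep voiced frames only
    return {
        "mean_f0": float(np.mean(f0)),
        "f0_range": float(np.max(f0) - np.min(f0)),
        "duration": len(y) / sr,
    }

# Example with made-up counts: rows = presented emotion, columns = response.
conf = np.array([
    [18,  1,  0,  1,  0],
    [ 2, 16,  0,  2,  0],
    [ 0,  0, 15,  3,  2],
    [ 1,  3,  2, 13,  1],
    [ 0,  0,  3,  1, 16],
])
for emo, hu in zip(EMOTIONS, unbiased_hit_rates(conf)):
    print(f"{emo}: Hu = {hu:.3f}")
```

Hu is a proportion in [0, 1]; in perception studies it is often arcsine-transformed (e.g., 2·arcsin(√Hu)) before statistical testing, though whether that was done here is not stated in the abstract.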
Conclusions: Despite experiencing more challenges than their NH peers, Mandarin-speaking bimodal listeners showed improved voice emotion perception when using contralateral HAs. Within the bimodal group, better residual hearing in the nonimplanted ear and stronger speech recognition ability were associated with better voice emotion perception.