{"title":"线索语音感知中的视听整合:听力损失成人和正常听力成人在安静和噪音环境下语音识别的影响。","authors":"Cora Jirschik Caron, Coriandre Vilain, Jean-Luc Schwartz, Jacqueline Leybaert, Cécile Colin","doi":"10.1044/2025_JSLHR-24-00334","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to investigate audiovisual (AV) integration of cued speech (CS) gestures with the auditory input presented in quiet and amidst noise while controlling for visual speech decoding. Additionally, the study considered participants' auditory status and auditory abilities as well as their abilities to produce and decode CS in speech perception.</p><p><strong>Method: </strong>Thirty-one adults with hearing loss (HL) and proficient in CS decoding participated, alongside 52 adults with typical hearing (TH), consisting of 14 CS interpreters and 38 individuals naive regarding the system. The study employed a speech recognition test that presented CS gestures, lipreading, and lipreading integrated with CS gestures, either without sound or combined with speech sounds in quiet or amidst noise.</p><p><strong>Results: </strong>Participants with HL and lower auditory abilities integrated the auditory input with CS gestures and increased their recognition scores by 44% in quiet conditions of speech recognition. For participants with HL and higher auditory abilities, integrating CS gestures with the auditory input mixed with noise increased recognition scores by 43.1% over the auditory-only condition. For all participants with HL, CS integrated with lipreading produced optimal recognition regardless of their auditory abilities, while for those with TH, adding CS gestures did not enhance lipreading, and AV benefits were observed only when lipreading was integrated with the auditory input presented amidst noise.</p><p><strong>Conclusions: </strong>Individuals with HL are able to integrate CS gestures with auditory input. Visually supporting auditory speech with CS gestures improves speech recognition in noise and also in quiet conditions of communication for participants with HL and low auditory abilities.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-19"},"PeriodicalIF":2.2000,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Audiovisual Integration in Cued Speech Perception: Impact on Speech Recognition in Quiet and Noise Among Adults With Hearing Loss and Those With Typical Hearing.\",\"authors\":\"Cora Jirschik Caron, Coriandre Vilain, Jean-Luc Schwartz, Jacqueline Leybaert, Cécile Colin\",\"doi\":\"10.1044/2025_JSLHR-24-00334\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>This study aimed to investigate audiovisual (AV) integration of cued speech (CS) gestures with the auditory input presented in quiet and amidst noise while controlling for visual speech decoding. Additionally, the study considered participants' auditory status and auditory abilities as well as their abilities to produce and decode CS in speech perception.</p><p><strong>Method: </strong>Thirty-one adults with hearing loss (HL) and proficient in CS decoding participated, alongside 52 adults with typical hearing (TH), consisting of 14 CS interpreters and 38 individuals naive regarding the system. 
The study employed a speech recognition test that presented CS gestures, lipreading, and lipreading integrated with CS gestures, either without sound or combined with speech sounds in quiet or amidst noise.</p><p><strong>Results: </strong>Participants with HL and lower auditory abilities integrated the auditory input with CS gestures and increased their recognition scores by 44% in quiet conditions of speech recognition. For participants with HL and higher auditory abilities, integrating CS gestures with the auditory input mixed with noise increased recognition scores by 43.1% over the auditory-only condition. For all participants with HL, CS integrated with lipreading produced optimal recognition regardless of their auditory abilities, while for those with TH, adding CS gestures did not enhance lipreading, and AV benefits were observed only when lipreading was integrated with the auditory input presented amidst noise.</p><p><strong>Conclusions: </strong>Individuals with HL are able to integrate CS gestures with auditory input. Visually supporting auditory speech with CS gestures improves speech recognition in noise and also in quiet conditions of communication for participants with HL and low auditory abilities.</p>\",\"PeriodicalId\":520690,\"journal\":{\"name\":\"Journal of speech, language, and hearing research : JSLHR\",\"volume\":\" \",\"pages\":\"1-19\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-07-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of speech, language, and hearing research : JSLHR\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1044/2025_JSLHR-24-00334\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of speech, language, and hearing research : JSLHR","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1044/2025_JSLHR-24-00334","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Audiovisual Integration in Cued Speech Perception: Impact on Speech Recognition in Quiet and Noise Among Adults With Hearing Loss and Those With Typical Hearing.
Purpose: This study aimed to investigate audiovisual (AV) integration of cued speech (CS) gestures with auditory input presented in quiet and in noise, while controlling for visual speech decoding. Additionally, the study considered participants' auditory status and auditory abilities, as well as their ability to produce and decode CS, as factors in speech perception.
Method: Thirty-one adults with hearing loss (HL) who were proficient in CS decoding participated, alongside 52 adults with typical hearing (TH): 14 CS interpreters and 38 individuals naive to the system. The study employed a speech recognition test that presented CS gestures, lipreading, and lipreading combined with CS gestures, either without sound or together with speech sounds presented in quiet or in noise.
Results: Participants with HL and lower auditory abilities integrated the auditory input with CS gestures, increasing their recognition scores by 44% in the quiet condition. For participants with HL and higher auditory abilities, integrating CS gestures with auditory input mixed with noise increased recognition scores by 43.1% over the auditory-only condition. For all participants with HL, CS integrated with lipreading produced optimal recognition regardless of auditory abilities. For those with TH, adding CS gestures did not enhance lipreading, and AV benefits were observed only when lipreading was integrated with auditory input presented in noise.
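Note that the abstract reports gains of 44% and 43.1% without specifying whether these are absolute percentage-point increases or relative increases over the comparison condition. As a purely illustrative sketch with hypothetical scores (Python; the function name and the example values are invented for illustration, not taken from the study), both readings can be computed:

def recognition_gain(baseline_pct, av_pct):
    # Absolute gain in percentage points, and relative gain expressed
    # as a percentage of the baseline score.
    absolute = av_pct - baseline_pct
    relative = 100.0 * (av_pct - baseline_pct) / baseline_pct
    return absolute, relative

# Hypothetical example: an auditory-only score of 30% rising to 43%
# when CS gestures are added.
print(recognition_gain(30.0, 43.0))  # -> (13.0, 43.33...)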
Conclusions: Individuals with HL are able to integrate CS gestures with auditory input. Visually supporting auditory speech with CS gestures improves speech recognition in noise and, for participants with HL and low auditory abilities, in quiet communication conditions as well.