Audiovisual Integration in Cued Speech Perception: Impact on Speech Recognition in Quiet and Noise Among Adults With Hearing Loss and Those With Typical Hearing

Impact Factor: 2.2
Cora Jirschik Caron, Coriandre Vilain, Jean-Luc Schwartz, Jacqueline Leybaert, Cécile Colin
Journal of Speech, Language, and Hearing Research (JSLHR), pp. 1-19. Published 2025-07-21. DOI: 10.1044/2025_JSLHR-24-00334
Citations: 0

Abstract


Purpose: This study aimed to investigate audiovisual (AV) integration of cued speech (CS) gestures with the auditory input presented in quiet and amidst noise while controlling for visual speech decoding. Additionally, the study considered participants' auditory status and auditory abilities as well as their abilities to produce and decode CS in speech perception.

Method: Thirty-one adults with hearing loss (HL) and proficient in CS decoding participated, alongside 52 adults with typical hearing (TH), consisting of 14 CS interpreters and 38 individuals naive regarding the system. The study employed a speech recognition test that presented CS gestures, lipreading, and lipreading integrated with CS gestures, either without sound or combined with speech sounds in quiet or amidst noise.

Results: Participants with HL and lower auditory abilities integrated the auditory input with CS gestures, increasing their recognition scores by 44% in the quiet listening condition. For participants with HL and higher auditory abilities, integrating CS gestures with the auditory input presented in noise increased recognition scores by 43.1% over the auditory-only condition. For all participants with HL, CS integrated with lipreading produced optimal recognition regardless of auditory ability, whereas for those with TH, adding CS gestures did not enhance lipreading, and AV benefits were observed only when lipreading was integrated with the auditory input presented in noise.

Conclusions: Individuals with HL are able to integrate CS gestures with auditory input. Visually supporting auditory speech with CS gestures improves speech recognition in noise, and also in quiet communication conditions, for participants with HL and low auditory abilities.
