Spectral Grouping of Electrically Encoded Sound Predicts Speech-in-Noise Performance in Cochlear Implantees

Inyong Choi, Phillip E. Gander, Joel I. Berger, Jihwan Woo, Matthew H. Choy, Jean Hong, Sarah Colby, Bob McMurray, Timothy D. Griffiths
{"title":"Spectral Grouping of Electrically Encoded Sound Predicts Speech-in-Noise Performance in Cochlear Implantees","authors":"Inyong Choi, Phillip E. Gander, Joel I. Berger, Jihwan Woo, Matthew H. Choy, Jean Hong, Sarah Colby, Bob McMurray, Timothy D. Griffiths","doi":"10.1007/s10162-023-00918-x","DOIUrl":null,"url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Objectives</h3><p>Cochlear implant (CI) users exhibit large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlates with speech-in-noise ability, but a large portion of variance remains unexplained. Recent work on normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central mechanism for auditory scene analysis that contributes to speech-in-noise. The current study examined whether the auditory grouping ability also contributes to speech-in-noise understanding in CI users.</p><h3 data-test=\"abstract-sub-heading\">Design</h3><p>Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that depends on the detection of a figure by grouping multiple fixed frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other tasks.</p><h3 data-test=\"abstract-sub-heading\">Results</h3><p>No co-linearity was found between any predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) exhibited significant contribution in the multiple linear regression model, indicating that the auditory grouping ability in a complex auditory scene explains a further proportion of variance in CI users’ speech-in-noise performance that was not explained by spectral and temporal resolution.</p><h3 data-test=\"abstract-sub-heading\">Conclusion</h3><p>Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.</p>","PeriodicalId":17236,"journal":{"name":"Journal of the Association for Research in Otolaryngology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the Association for Research in Otolaryngology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10162-023-00918-x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Objectives

Cochlear implant (CI) users exhibit large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlate with speech-in-noise ability, but a large portion of the variance remains unexplained. Recent work on normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central auditory scene analysis mechanism that contributes to speech-in-noise understanding. The current study examined whether this auditory grouping ability also contributes to speech-in-noise understanding in CI users.

Design

Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that depends on detecting a figure by grouping multiple fixed-frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other three measures.
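To make the analysis concrete, the sketch below shows how such a multiple linear regression might be set up in Python with pandas and statsmodels. This is a minimal illustration, not the study's actual analysis code; the file name and column names (`spectral`, `temporal`, `figure_ground`, `sentence_in_noise`) are hypothetical placeholders.

```python
# Minimal sketch of a multiple linear regression predicting
# sentence-in-noise performance from three psychophysical measures.
# File and column names are hypothetical placeholders, not the study's.
import pandas as pd
import statsmodels.api as sm

# One row per CI user: three predictor scores and the outcome.
df = pd.read_csv("ci_scores.csv")

X = sm.add_constant(df[["spectral", "temporal", "figure_ground"]])
y = df["sentence_in_noise"]

model = sm.OLS(y, X).fit()
print(model.summary())  # per-predictor coefficients, t-tests, and overall R-squared
```

The summary table reports each predictor's coefficient and its significance, which is the basis for the per-predictor contribution claims in the Results below.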

Results

No collinearity was found among the predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) contributed significantly to the multiple linear regression model, indicating that auditory grouping ability in a complex auditory scene explains a proportion of the variance in CI users' speech-in-noise performance beyond that explained by spectral and temporal resolution.
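The abstract does not state how collinearity was assessed; one common check is the variance inflation factor (VIF). A minimal sketch, reusing the hypothetical design matrix `X` from the regression example above:

```python
# Collinearity check via variance inflation factors (VIF),
# computed on the same hypothetical design matrix X as above.
# VIF values near 1 indicate negligible collinearity.
from statsmodels.stats.outliers_influence import variance_inflation_factor

for i, name in enumerate(X.columns):
    if name == "const":
        continue  # skip the intercept column
    print(f"{name}: VIF = {variance_inflation_factor(X.values, i):.2f}")
```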

Conclusion

Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.
