Spectral Grouping of Electrically Encoded Sound Predicts Speech-in-Noise Performance in Cochlear Implantees

Inyong Choi, Phillip E. Gander, Joel I. Berger, Jihwan Woo, Matthew H. Choy, Jean Hong, Sarah Colby, Bob McMurray, Timothy D. Griffiths

Journal of the Association for Research in Otolaryngology, published 2023-12-07. DOI: 10.1007/s10162-023-00918-x
Abstract
Objectives
Cochlear implant (CI) users exhibit large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlates with speech-in-noise ability, but a large portion of variance remains unexplained. Recent work on normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central mechanism for auditory scene analysis that contributes to speech-in-noise perception. The current study examined whether this auditory grouping ability also contributes to speech-in-noise understanding in CI users.
Design
Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that depends on the detection of a figure by grouping multiple fixed-frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other tasks.
Results
No collinearity was found among the predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) made significant contributions to the multiple linear regression model, indicating that auditory grouping ability in a complex auditory scene explains a further proportion of variance in CI users' speech-in-noise performance that was not explained by spectral and temporal resolution.
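The analysis above can be illustrated with a minimal sketch: fit a three-predictor ordinary-least-squares model and check collinearity via variance inflation factors (VIFs). All data below are synthetic, and the predictor names are placeholders standing in for the study's spectral-resolution, temporal-resolution, and figure-ground scores; this is not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 47  # same sample size as the study; data here are entirely synthetic

# Hypothetical predictors: spectral resolution, temporal resolution,
# figure-ground score (independent by construction, so low collinearity)
X = rng.normal(size=(n, 3))
# Synthetic outcome: sentence-in-noise score with all three contributing
y = X @ np.array([0.5, 0.4, 0.6]) + rng.normal(scale=0.5, size=n)

# Multiple linear regression by ordinary least squares
A = np.column_stack([np.ones(n), X])           # prepend intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # [intercept, b1, b2, b3]

def vif(X, j):
    """VIF_j = 1 / (1 - R^2_j), where R^2_j is from regressing
    predictor j on the remaining predictors."""
    others = np.delete(X, j, axis=1)
    B = np.column_stack([np.ones(len(X)), others])
    coef, *_ = np.linalg.lstsq(B, X[:, j], rcond=None)
    resid = X[:, j] - B @ coef
    r2 = 1.0 - resid.var() / X[:, j].var()
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
print("coefficients:", np.round(beta[1:], 2))
print("VIFs:", np.round(vifs, 2))
```

A common rule of thumb is that VIFs below about 5 indicate no problematic collinearity, in which case each predictor's coefficient can be interpreted as an independent contribution to the outcome.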
Conclusion
Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.