Trends in Hearing, vol. 29 (2025). Epub 2025-06-27. DOI: 10.1177/23312165251347138

A Prospective, Multicentre Case-Control Trial Examining Factors That Explain Variable Clinical Performance in Post Lingual Adult CI Recipients
Pam Dawson, Amanda Fullerton, Harish Krishnamoorthi, Kerrie Plant, Robert Cowan, Nadine Buczak, Christopher Long, Chris J James, Fergio Sismono, Andreas Büchner

Abstract: This study investigated which of a range of factors could explain performance in two distinct groups of experienced adult cochlear implant recipients, differentiated by performance on words in quiet: 72 with poorer word scores versus 77 with better word scores. Tests measured the potential contribution of sound-processor mapping, electrode placement, neural health, impedance, cognitive, and patient-related factors in predicting performance. A systematically measured sound-processor MAP was compared to each subject's walk-in MAP. Electrode-placement measures included modiolar distance, basal and apical insertion angle, and presence of scalar translocation. Neural-health measures included bipolar thresholds, the polarity effect using asymmetrical pulses, and evoked compound action potential (ECAP) measures such as the interphase gap (IPG) effect, total refractory time, and panoramic ECAP. Impedance measures included the transimpedance matrix and four-point impedance. Cognitive tests comprised vocabulary ability, the Stroop test, and the Symbol Digit Modalities Test. Performance was measured with words-in-quiet and sentences-in-noise tests and with basic auditory sensitivity measures, including phoneme discrimination in noise and in quiet, amplitude-modulation detection thresholds, and quick spectral-modulation detection. A range of predictor variables accounted for between 33% and 60% of the variability in performance outcomes. Multivariable regression analyses showed four key factors that were consistently predictive of poorer performance across several outcomes: substantially underfitted sound-processor MAP thresholds, higher average bipolar thresholds, greater total refractory time, and greater IPG offset. Scalar translocation, cognitive variables, and other patient-related factors were also significant predictors across more than one performance outcome.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12205208/pdf/
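The variance-explained figures above come from multivariable regression. As an illustrative sketch only, not the authors' analysis code, the R² (fraction of variance explained) of a least-squares fit can be computed as follows; the predictor interpretations in the comments are hypothetical:

```python
import numpy as np

def r_squared(X, y):
    """Fraction of variance in y explained by a least-squares fit on X."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy data: a word score driven by two hypothetical predictors
# (e.g., MAP threshold underfit and average bipolar threshold) plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(149, 2))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.5, size=149)
print(f"R^2 = {r_squared(X, y):.2f}")
```

With real predictors and outcomes in place of the toy arrays, the same computation yields the kind of 33% to 60% variance-explained figures the abstract reports.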
Trends in Hearing, vol. 29 (2025). Epub 2025-09-19. DOI: 10.1177/23312165251375892

Hearing Aid Use is Associated with Faster Visual Lexical Decision
Ruijing Ning, Carine Signoret, Emil Holmer, Henrik Danielsson

Abstract: This study investigates the impact of hearing aid (HA) use on visual lexical decision (LD) performance in individuals with hearing loss. We hypothesized that HA use benefits phonological processing and leads to faster and more accurate visual LD. We compared visual LD performance among three groups: 92 short-term HA users (<5 years), 98 long-term HA users, and 55 nonusers, while controlling for hearing level, age, and years of education. Compared with nonusers, HA users showed significantly faster reaction times in visual LD; in particular, long-term HA use was associated with a smaller reaction-time difference between pseudowords and nonwords. These results suggest that HA use is associated with faster visual word recognition, potentially reflecting enhanced cognitive functions beyond auditory processing, and point to possible cognitive advantages linked to HA use.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12449647/pdf/
Trends in Hearing, vol. 29 (2025). DOI: 10.1177/23312165251320789

Repairing Misperceptions of Words Early in a Sentence is More Effortful Than Repairing Later Words, Especially for Listeners With Cochlear Implants
Michael L Smith, Matthew B Winn

Abstract: The process of repairing misperceptions has been identified as a contributor to effortful listening in people who use cochlear implants (CIs). The current study examined the relative cost of repairing misperceptions at earlier or later parts of a sentence containing contextual information that could be used to infer words both predictively and retroactively. Misperceptions were enforced at specific times by replacing single words with noise. Changes in pupil dilation were analyzed to track differences in the timing and duration of effort, comparing listeners with typical hearing (TH) and listeners with CIs. Increases in pupil dilation were time-locked to the moment of the missing word, with longer-lasting increases when the missing word occurred earlier in the sentence. Compared to listeners with TH, CI listeners showed elevated pupil dilation for longer periods after listening, suggesting a lingering effect of effort after sentence offset. When needing to mentally repair missing words, CI listeners also made more mistakes on words elsewhere in the sentence, even though those words were not masked. Changes in effort based on the position of the missing word were not evident in basic measures such as peak pupil dilation and emerged only when the full time course was analyzed, suggesting that the timing analysis adds new information to our understanding of listening effort. These results demonstrate that some mistakes are more costly than others and incur different levels of mental effort to resolve, underscoring the information lost when speech perception is characterized with simple measures such as percent-correct scores.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11851752/pdf/
Trends in Hearing, vol. 29 (2025). Epub 2025-05-28. DOI: 10.1177/23312165251344947

Binaural Speech Intelligibility in Noise and Reverberation: Prediction of Group Performance for Normal-hearing and Hearing-impaired Listeners
Raphael Cueille, Mathieu Lavandier

Abstract: A binaural model is proposed to predict speech intelligibility in rooms for normal-hearing (NH) and hearing-impaired listener groups, combining the advantages of two existing models. The leclere2015 model takes binaural room impulse responses (BRIRs) as inputs and accounts for the temporal smearing of speech by reverberation, but works only with stationary noises and NH listeners. The vicente2020 model takes the speech and noise signals at the ears, together with the listener's audiogram, as inputs and accounts for modulations in the noise and for hearing loss, but cannot predict the temporal smearing of speech by reverberation. The new model takes the audiogram, BRIRs, and ear signals as inputs to account for the temporal smearing of speech, masker modulations, and hearing loss. It gave accurate predictions for speech reception thresholds measured in seven experiments. The proposed model can make predictions that neither of the two original models can: cases in which the target speech is affected by reverberation while the noise is modulated and/or the listeners have hearing loss. Regarding model parameters, four methods for separating early and late reverberation and two methods for accounting for hearing loss were compared.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12120292/pdf/
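The abstract mentions separating early and late reverberation without naming the four methods compared. A minimal sketch of one common family of approaches, splitting a BRIR at a fixed boundary after the direct-sound peak; the 50-ms default and the function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def split_brir(brir, fs, boundary_ms=50.0):
    """Split an impulse response into early and late parts at a fixed
    boundary after the direct-sound peak (one of several possible methods).

    brir : 1-D array, binaural room impulse response (one ear)
    fs   : sample rate in Hz
    """
    direct = int(np.argmax(np.abs(brir)))          # direct-sound peak index
    cut = direct + int(boundary_ms * 1e-3 * fs)    # boundary sample
    early = np.zeros_like(brir)
    late = np.zeros_like(brir)
    early[:cut] = brir[:cut]                       # direct + early reflections
    late[cut:] = brir[cut:]                        # late reverberant tail
    return early, late
```

Convolving the target speech with the early part and treating the late part as an additional masker is the usual way such a split feeds an intelligibility model; the two parts always sum back to the original BRIR.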
Trends in Hearing, vol. 29 (2025). Epub 2025-05-14. DOI: 10.1177/23312165251340864

Estimating Cochlear Implant Users' Sound Localization Abilities With Two Loudspeakers
Emily Buss, Margaret E Richter, Amanda D Sloop, Margaret T Dillon

Abstract: The ability to tell where sound sources are in space is ecologically important for spatial awareness and communication in multisource environments. While hearing aids and cochlear implants (CIs) can support spatial hearing for some users, this ability is not routinely assessed clinically. The present study compared sound source localization for a 200-ms speech-shaped noise presented using real sources at 18° intervals from -54° to +54° azimuth and virtual sources simulated by amplitude panning between sources at -54° and +54°. Participants were 34 adult CI or electric-acoustic stimulation users, including individuals with single-sided deafness or aided acoustic hearing. The pattern of localization errors by participant was broadly similar for real and virtual sources, with some modest differences; for example, the root mean square (RMS) error for the two conditions was correlated at r = .89 (p < .001), with a mean RMS elevation of 3.9° for virtual sources. These results suggest that sound source localization with two-speaker amplitude panning may provide clinically useful information when testing with real sources is infeasible.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12078988/pdf/
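Amplitude panning, as used here for the virtual sources, maps a target azimuth to a pair of loudspeaker gains. The paper does not state which panning law was used; a sketch of the common tangent law for speakers at ±54°, with power-normalized gains (the function and parameter names are my own):

```python
import numpy as np

def pan_gains(theta_deg, span_deg=54.0):
    """Tangent-law amplitude-panning gains for loudspeakers at +/-span_deg.

    Convention: positive theta_deg pans toward the right (+span_deg) speaker.
    Returns (g_left, g_right), normalized so g_left^2 + g_right^2 == 1.
    """
    # Tangent law: (gR - gL) / (gR + gL) = tan(theta) / tan(span)
    t = np.clip(np.tan(np.radians(theta_deg)) / np.tan(np.radians(span_deg)),
                -1.0, 1.0)
    g_right = (1.0 + t) / 2.0
    g_left = (1.0 - t) / 2.0
    norm = np.sqrt(g_left ** 2 + g_right ** 2)     # constant-power scaling
    return g_left / norm, g_right / norm
```

At 0° both speakers receive equal gain; at ±54° all energy goes to a single speaker, matching the endpoints of the real-source array.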
Trends in Hearing, vol. 29 (2025). DOI: 10.1177/23312165251343757

Is Noise Exposure Associated With Impaired Extended High Frequency Hearing Despite a Normal Audiogram? A Systematic Review and Meta-Analysis
Sajana Aryal, Monica Trevino, Hansapani Rodrigo, Srikanta Mishra

Abstract: Understanding the initial signature of noise-induced auditory damage remains a significant priority. Animal models suggest the cochlear base is particularly vulnerable to noise, raising the possibility that early-stage noise exposure could be linked to basal cochlear dysfunction even when thresholds at 0.25-8 kHz are normal. To investigate this in humans, we conducted a systematic review and meta-analysis examining the association between noise exposure and hearing at 9-20 kHz as a marker of basal cochlear dysfunction, following PRISMA guidelines and the PICOS framework. Studies of noise exposure and hearing in the 9-20 kHz region in adults with clinically normal audiograms were identified by searching five electronic databases (e.g., PubMed). Cohorts from 30 studies, comprising approximately 2,500 participants, were systematically reviewed; the meta-analysis covered 23 studies, using a random-effects model for occupational and recreational noise exposure. The analysis showed a significant positive association between occupational noise and hearing thresholds, with medium effect sizes at 9 and 11.2 kHz and large effect sizes at 10, 12, 14, and 16 kHz. The association with recreational noise was less consistent, with significant effects only at 12, 12.5, and 16 kHz. Egger's test indicated some publication bias, specifically at 10 kHz. The findings suggest that thresholds above 8 kHz may indicate early effects of noise exposure even when lower-frequency (≤8 kHz) thresholds remain normal. Longitudinal studies incorporating noise dosimetry are crucial to establish causality and to further support the clinical utility of extended high-frequency testing.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12084714/pdf/
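A random-effects meta-analysis of the kind described pools per-study effect sizes with weights that incorporate a between-study variance term. A minimal sketch using the common DerSimonian-Laird estimator; the paper does not report which estimator it used, so this is an assumption for illustration:

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled effect size and its SE."""
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)         # Cochran's Q statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se
```

Applied per frequency to the per-study effect sizes, this yields a pooled effect whose magnitude can then be classified as small, medium, or large, as in the abstract's summary.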
Trends in Hearing, vol. 29 (2025). Epub 2025-08-14. DOI: 10.1177/23312165251367630

Pupillary Responses During a Dual Task: Effect of Noise Attenuation on the Timing of Cognitive Resource Allocation
Federica Bianchi, Sindri Jonsson, Torben Christiansen, Elaine Hoi Ning Ng

Abstract: Although multitasking is a common everyday activity, it is often challenging. The aim of this study was to evaluate the effect of noise attenuation during an audio-visual dual task and to investigate cognitive resource allocation over time via pupillometry. Twenty-six normal-hearing participants performed a dual task consisting of a primary speech-recognition task and a secondary visual reaction-time task, as well as a visual-only task. Four conditions were tested in the dual task: two speech levels (60 and 64 dB SPL) and two noise conditions (No Attenuation, with noise at 70 dB SPL; Attenuation, with noise attenuated by passive damping). Elevated pupillary responses in the No Attenuation condition relative to the Attenuation and visual-only conditions indicated that participants allocated additional resources to the primary task during playback of the first part of the sentence, while reaction time on the secondary task increased significantly relative to the visual-only task. In the Attenuation condition, participants performed the secondary task with a reaction time similar to the visual-only task (no dual-task cost), while pupillary responses revealed allocation of resources to the primary task after completion of the secondary task. These findings reveal that the temporal dynamics of cognitive resource allocation between the primary and secondary tasks were affected by the level of background noise in the primary task. The study demonstrates that noise attenuation, as offered for example by audio devices, frees up cognitive resources in noisy listening environments and may help improve performance and decrease dual-task costs during multitasking.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12357024/pdf/
Trends in Hearing, vol. 29 (2025). Epub 2025-06-25. DOI: 10.1177/23312165251345572

Comprehensive Measurements and Analyses of Ear Canal Geometry From Late Infancy Through Late Adulthood: Age-Related Variations and Implications for Basic Science and Audiological Measurements
Susan E Voss, Aaron K Remenschneider, Rebecca M Farrar, Soomin Myoung, Nicholas J Horton

Abstract: This study provides a comprehensive analysis of ear canal geometry from 0.7 to 91 years of age, based on high-resolution computed tomography scans of 221 ears. Quantified features include cross-sectional areas along the canal's length, total canal length, curvature, and key anatomical landmarks such as the first and second bends and the cartilage-to-bone transition. Significant developmental changes occur during the first 10 years of life, with adult-like characteristics emerging between ages 10 and 15 years, likely coinciding with puberty. Substantial interindividual variability is observed at all ages, particularly in canal area. The canal is fully cartilaginous at and lateral to the second bend by 0.7 years, with further growth occurring only in the bony segment thereafter. These anatomical findings have important implications for audiologic threshold assessments, wideband acoustic immittance measures, age-appropriate hearing aid fitting schedules, and surgical planning, particularly in pediatric populations, where anatomical variation is greatest.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12198549/pdf/
Trends in Hearing, vol. 29 (2025). Epub 2025-05-30. DOI: 10.1177/23312165251347131

Language-agnostic, Automated Assessment of Listeners' Speech Recall Using Large Language Models
Björn Herrmann

Abstract: Speech-comprehension difficulties are common among older people. Standard speech tests do not fully capture such difficulties because they poorly resemble the context-rich, story-like nature of ongoing conversation and are typically available only in a country's dominant or official language (e.g., English), leading to inaccurate scores for native speakers of other languages. Assessing naturalistic, story-like speech in multiple languages requires accurate, time-efficient scoring. The current research leverages modern large language models (LLMs), tested with native English speakers and native speakers of 10 other languages, to automate both the generation of high-quality spoken stories and the scoring of speech recall in different languages. Participants listened to and freely recalled short stories (in quiet/clear speech and in babble noise) in their native language. LLM text embeddings and LLM prompt engineering with semantic-similarity analyses for scoring recall revealed sensitivity to known effects of temporal order, primacy/recency, and background noise, and high similarity of recall scores across languages. The work overcomes limitations associated with simple speech materials and with testing restricted to native-speaker groups, because recall data of varying length and detail can be mapped across languages with high accuracy. The full automation of speech generation and recall scoring is an important step toward clinically applicable comprehension assessments of naturalistic speech.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12125525/pdf/
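Semantic-similarity scoring of recall against a story typically compares text embeddings via cosine similarity. A toy sketch with a hypothetical thresholded matching rule; the threshold value, the segmentation into idea units, and the scoring rule are illustrative assumptions, not the paper's exact method, and real embeddings would come from an LLM rather than the toy vectors below:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recall_score(story_embeddings, recall_embeddings, threshold=0.8):
    """Fraction of story idea units matched by any recalled unit
    (hypothetical scoring rule for illustration)."""
    hits = 0
    for s in story_embeddings:
        if any(cosine_similarity(s, r) >= threshold for r in recall_embeddings):
            hits += 1
    return hits / len(story_embeddings)
```

Because embeddings from multilingual models place semantically equivalent sentences near each other regardless of language, the same scoring pipeline can in principle be reused across the languages tested.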
Trends in Hearing, vol. 29 (2025). Epub 2025-04-13. DOI: 10.1177/23312165251333528

Objectively Measuring Audiovisual Effects in Noise Using Virtual Human Speakers
John Kyle Cooper, Jonas Vanthornhout, Astrid van Wieringen, Tom Francart

Abstract: Speech intelligibility in challenging listening environments relies on the integration of audiovisual cues, but measuring the effectiveness of audiovisual integration in such environments can be difficult because of their complexity. The Audiovisual True-to-Life Assessment of Auditory Rehabilitation (AVATAR) is a paradigm developed to provide an ecological environment that captures both the audio and the visual aspects of speech intelligibility measures. Previous research has shown that the benefit from audiovisual cues can be measured with behavioral (e.g., word recognition) and electrophysiological (e.g., neural tracking) measures. The current study examined whether, under the AVATAR paradigm, electrophysiological measures of speech intelligibility yield outcomes similar to behavioral measures. We hypothesized that visual cues would enhance both behavioral and electrophysiological scores as the signal-to-noise ratio (SNR) of the speech signal decreased. Twenty young (18-25 years old) participants (1 male, 19 female) with normal hearing took part. For the behavioral experiment, we administered lists of sentences using an adaptive procedure to estimate a speech reception threshold (SRT). For the electrophysiological experiment, we administered 35 lists of sentences randomized across five SNR levels (silence, 0, -3, -6, and -9 dB) and two visual conditions (audio-only and audiovisual), and used a neural-tracking decoder to measure reconstruction accuracies for each participant. Most participants had higher reconstruction accuracies in the audiovisual condition than in the audio-only condition at moderate to high noise levels, and the electrophysiological measure may correlate with the behavioral measure of audiovisual benefit.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12033406/pdf/
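Neural-tracking decoders of this kind commonly reconstruct the speech envelope from multichannel EEG with a regularized linear mapping, quantifying reconstruction accuracy as the correlation between the decoded and actual envelopes. A minimal ridge-regression sketch, omitting the time-lagged feature expansion a real decoder would use; all function and variable names are illustrative, not from the paper:

```python
import numpy as np

def fit_backward_decoder(eeg, envelope, lam=1.0):
    """Ridge regression mapping EEG (time x channels) to the speech
    envelope: w = (X'X + lam*I)^{-1} X'y."""
    X, y = eeg, envelope
    n_ch = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_ch), X.T @ y)

def reconstruction_accuracy(eeg, envelope, w):
    """Pearson correlation between the decoded and actual envelopes."""
    decoded = eeg @ w
    return float(np.corrcoef(decoded, envelope)[0, 1])
```

In practice the decoder is trained and evaluated on separate data, and per-condition accuracies (audio-only vs. audiovisual, across SNRs) are then compared, which is the comparison the abstract summarizes.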