Trends in Hearing, vol. 29. Pub Date: 2025-01-01; Epub Date: 2025-03-16; DOI: 10.1177/23312165251317010
Timothy Beechey, Graham Naylor
How Purposeful Adaptive Responses to Adverse Conditions Facilitate Successful Auditory Functioning: A Conceptual Model
Abstract: This paper describes a conceptual model of adaptive responses to adverse auditory conditions, with the aim of providing a basis for better understanding the demands of, and opportunities for, successful real-life auditory functioning. We review examples of behaviors that facilitate auditory functioning in adverse conditions. Next, we outline the concept of purpose-driven behavior and describe how changing behavior can ensure stable performance in a changing environment. We describe how tasks and environments (both physical and social) dictate which behaviors are possible and effective facilitators of auditory functioning, and how hearing disability may be understood in terms of capacity to adapt to the environment. A conceptual model of adaptive cognitive, physical, and linguistic responses within a moderating negative feedback system is presented, along with implications for the interpretation of auditory experiments that seek to predict functioning outside the laboratory or clinic. We argue that taking account of how people can improve their own performance by adapting their behavior and modifying their environment may contribute to more robust and generalizable experimental findings.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11912170/pdf/
Trends in Hearing, vol. 29. Pub Date: 2025-01-01; Epub Date: 2025-05-27; DOI: 10.1177/23312165251342436
Nuphar Singer, Yael Zaltz
Auditory Learning and Generalization in Older Adults: Evidence from Voice Discrimination Training
Abstract: Auditory learning is essential for adapting to continuously changing acoustic environments. This adaptive capability, however, may be impacted by age-related declines in sensory and cognitive functions, potentially limiting learning efficiency and generalization in older adults. This study investigated auditory learning and generalization in 24 older (65-82 years) and 24 younger (18-34 years) adults through voice discrimination (VD) training. Participants were divided into training (12 older, 12 younger) and control (12 older, 12 younger) groups. Trained participants completed five sessions: two testing sessions assessing VD performance using a 2-down 1-up adaptive procedure with F0-only, formant-only, and combined F0 + formant cues, and three training sessions focusing exclusively on VD with F0 cues. Control groups participated only in the two testing sessions, with no intermediate training. Results revealed significant training-induced improvements in VD with F0 cues for both younger and older adults, with comparable learning efficiency and gains across groups. However, generalization to the formant-only cue was observed only in younger adults, suggesting limited learning transfer in older adults. Additionally, VD training did not improve performance in the combined F0 + formant condition beyond control-group improvements, underscoring the specificity of perceptual learning. These findings provide novel insights into auditory learning in older adults, showing that while they retain the ability for significant auditory skill acquisition, age-related declines in perceptual flexibility may limit broader generalization. This study highlights the importance of designing targeted auditory interventions for older adults, considering their specific limitations in generalizing learning gains across different acoustic cues.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12117233/pdf/
Trends in Hearing, vol. 29. Pub Date: 2025-01-01; Epub Date: 2025-03-18; DOI: 10.1177/23312165251317027
Pedro Lladó, Piotr Majdak, Roberto Barumerli, Robert Baumgartner
Spectral Weighting of Monaural Cues for Auditory Localization in Sagittal Planes
Abstract: Localization of sound sources in sagittal planes relies significantly on monaural spectral cues. These cues are primarily derived from the direction-specific filtering of the pinnae. The contribution of specific frequency regions to cue evaluation has not been fully clarified. To this end, we analyzed how different spectral weighting schemes contribute to the explanatory power of a sagittal-plane localization model in response to wideband, flat-spectrum stimuli. Each weighting scheme emphasized the contribution of spectral cues within well-defined frequency bands, enabling us to assess their impact on predictions of individual patterns of localization responses. By means of Bayesian model selection, we compared five model variants representing various spectral weights. Our results indicate a preference for weighting schemes emphasizing the contribution of frequencies above 8 kHz, suggesting that, in the auditory system, spectral cue evaluation is upweighted in that frequency region. While various potential explanations are discussed, we conclude that special attention should be paid to this high-frequency region in spatial-audio applications aiming at the best localization performance.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11920987/pdf/
Trends in Hearing, vol. 29. Pub Date: 2025-01-01; DOI: 10.1177/23312165241312449
Aaron C Moberly, Liping Du, Terrin N Tamati
Individual Differences in the Recognition of Spectrally Degraded Speech: Associations With Neurocognitive Functions in Adult Cochlear Implant Users and With Noise-Vocoded Simulations
Abstract: When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding. This study investigated the neurocognitive mechanisms supporting recognition of spectrally degraded speech in adult CI users and normal-hearing (NH) peers listening to noise-vocoded speech, with the hypothesis that an overlapping set of neurocognitive functions would contribute to speech recognition in both groups. Ninety-seven adults participated, with either a CI (54 individuals, mean age 66.6 years, range 45-87 years) or age-normal hearing (43 individuals, mean age 66.8 years, range 50-81 years). Listeners heard materials varying in linguistic complexity: isolated words, meaningful sentences, anomalous sentences, high-variability sentences, and audiovisually (AV) presented sentences. Participants were also tested for vocabulary knowledge, nonverbal reasoning, working memory capacity, inhibition-concentration, and speed of lexical and phonological access. Linear regression analyses with robust standard errors were performed, regressing scores on each speech recognition task on the neurocognitive measures. Nonverbal reasoning contributed to meaningful sentence recognition in NH peers and to anomalous sentence recognition in CI users. Speed of lexical access contributed to performance on most speech tasks for CI users but not for NH peers. Finally, inhibition-concentration and vocabulary knowledge contributed to AV sentence recognition in NH listeners alone. Findings suggest that the complexity of speech materials may determine the particular contributions of neurocognitive skills, and that NH processing of noise-vocoded speech may not represent how CI listeners process speech.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11742172/pdf/
Trends in Hearing, vol. 29. Pub Date: 2025-01-01; DOI: 10.1177/23312165251320789
Michael L Smith, Matthew B Winn
Repairing Misperceptions of Words Early in a Sentence is More Effortful Than Repairing Later Words, Especially for Listeners With Cochlear Implants
Abstract: The process of repairing misperceptions has been identified as a contributor to effortful listening in people who use cochlear implants (CIs). The current study examined the relative cost of repairing misperceptions at earlier or later parts of a sentence containing contextual information that could be used to infer words both predictively and retroactively. Misperceptions were enforced at specific times by replacing single words with noise. Changes in pupil dilation were analyzed to track differences in the timing and duration of effort, comparing listeners with typical hearing (TH) and listeners with CIs. Increases in pupil dilation were time-locked to the moment of the missing word, with longer-lasting increases when the missing word occurred earlier in the sentence. Compared to listeners with TH, CI listeners showed elevated pupil dilation for longer periods after listening, suggesting a lingering effect of effort after sentence offset. When needing to mentally repair missing words, CI listeners also made more mistakes on words elsewhere in the sentence, even though those words were not masked. Changes in effort based on the position of the missing word were not evident in basic measures like peak pupil dilation and emerged only when the full time course was analyzed, suggesting that the timing analysis adds new information to our understanding of listening effort. These results demonstrate that some mistakes are more costly than others and incur different levels of mental effort to resolve, underscoring the information lost when characterizing speech perception with simple measures like percent-correct scores.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11851752/pdf/
Trends in Hearing, vol. 29. Pub Date: 2025-01-01; Epub Date: 2025-05-14; DOI: 10.1177/23312165251340864
Emily Buss, Margaret E Richter, Amanda D Sloop, Margaret T Dillon
Estimating Cochlear Implant Users' Sound Localization Abilities With Two Loudspeakers
Abstract: The ability to tell where sound sources are in space is ecologically important for spatial awareness and communication in multisource environments. While hearing aids and cochlear implants (CIs) can support spatial hearing for some users, this ability is not routinely assessed clinically. The present study compared sound source localization for a 200-ms speech-shaped noise presented using real sources at 18° intervals from -54° to +54° azimuth and virtual sources simulated by amplitude panning between sources at -54° and +54°. Participants were 34 adult CI or electric-acoustic stimulation users, including individuals with single-sided deafness or aided acoustic hearing. The pattern of localization errors by participant was broadly similar for real and virtual sources, with some modest differences: for example, root mean square (RMS) errors in the two conditions were correlated at r = .89 (p < .001), with RMS errors on average 3.9° higher for virtual sources. These results suggest that sound source localization with two-speaker amplitude panning may provide clinically useful information when testing with real sources is infeasible.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12078988/pdf/
Trends in Hearing, vol. 29. Pub Date: 2025-01-01; DOI: 10.1177/23312165241309301
Huiyong Zhang, Brian C J Moore, Feng Jiang, Mingfang Diao, Fei Ji, Xiaodong Li, Chengshi Zheng
Neural-WDRC: A Deep Learning Wide Dynamic Range Compression Method Combined With Controllable Noise Reduction for Hearing Aids
Abstract: Wide dynamic range compression (WDRC) and noise reduction both play important roles in hearing aids. WDRC provides level-dependent amplification so that the level of sound produced by the hearing aid falls between the hearing threshold and the highest comfortable level of the listener, while noise reduction reduces ambient noise with the goal of improving intelligibility and listening comfort and reducing effort. In most current hearing aids, noise reduction and WDRC are implemented sequentially, but this may distort the amplitude modulation patterns of both the speech and the noise. This paper describes a deep learning method, called Neural-WDRC, for implementing both noise reduction and WDRC, employing a two-stage low-complexity network. The network initially estimates the noise alone and the speech alone. Fast-acting compression is applied to the estimated speech and slow-acting compression to the estimated noise, but with a controllable residual noise level to help the user perceive natural environmental sounds. Neural-WDRC is frame-based, and the output of the current frame is determined only by the current and preceding frames. Neural-WDRC was compared with conventional slow- and fast-acting compression and with signal-to-noise ratio (SNR)-aware compression, using objective measures and listening tests with both normal-hearing participants (listening to signals processed to simulate the effects of hearing loss) and hearing-impaired participants. The objective measures demonstrated that Neural-WDRC effectively reduced negative interactions of speech and noise in highly non-stationary noise scenarios. The listening tests showed that Neural-WDRC was preferred over the other compression methods for speech in non-stationary noises.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11770718/pdf/
{"title":"Is Noise Exposure Associated With Impaired Extended High Frequency Hearing Despite a Normal Audiogram? A Systematic Review and Meta-Analysis.","authors":"Sajana Aryal, Monica Trevino, Hansapani Rodrigo, Srikanta Mishra","doi":"10.1177/23312165251343757","DOIUrl":"10.1177/23312165251343757","url":null,"abstract":"<p><p>Understanding the initial signature of noise-induced auditory damage remains a significant priority. Animal models suggest the cochlear base is particularly vulnerable to noise, raising the possibility that early-stage noise exposure could be linked to basal cochlear dysfunction, even when thresholds at 0.25-8 kHz are normal. To investigate this in humans, we conducted a meta-analysis following a systematic review, examining the association between noise exposure and hearing in frequencies from 9 to 20 kHz as a marker for basal cochlear dysfunction. Systematic review and meta-analysis followed PRISMA guidelines and the PICOS framework. Studies on noise exposure and hearing in the 9 to 20 kHz region in adults with clinically normal audiograms were included by searching five electronic databases (e.g., PubMed). Cohorts from 30 studies, comprising approximately 2,500 participants, were systematically reviewed. Meta-analysis was conducted on 23 studies using a random-effects model for occupational and recreational noise exposure. Analysis showed a significant positive association between occupational noise and hearing thresholds, with medium effect sizes at 9 and 11.2 kHz and large effect sizes at 10, 12, 14, and 16 kHz. However, the association with recreational noise was less consistent, with significant effects only at 12, 12.5, and 16 kHz. Egger's test indicated some publication bias, specifically at 10 kHz. Findings suggest thresholds above 8 kHz may indicate early noise exposure effects, even when lower-frequency (≤8 kHz) thresholds remain normal. Longitudinal studies incorporating noise dosimetry are crucial to establish causality and further support the clinical utility of extended high-frequency testing.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"29 ","pages":"23312165251343757"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12084714/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing, vol. 29. Pub Date: 2025-01-01; Epub Date: 2025-05-28; DOI: 10.1177/23312165251344947
Raphael Cueille, Mathieu Lavandier
Binaural Speech Intelligibility in Noise and Reverberation: Prediction of Group Performance for Normal-hearing and Hearing-impaired Listeners
Abstract: A binaural model is proposed to predict speech intelligibility in rooms for normal-hearing (NH) and hearing-impaired listener groups, combining the advantages of two existing models. The leclere2015 model takes binaural room impulse responses (BRIRs) as inputs and accounts for the temporal smearing of the speech by reverberation, but only works with stationary noises and for NH listeners. The vicente2020 model takes the speech and noise signals at the ears as well as the listener's audiogram as inputs and accounts for modulations in the noise and for hearing loss, but cannot predict the temporal smearing of the speech by reverberation. The new model takes the audiogram, BRIRs, and ear signals as inputs to account for the temporal smearing of the speech, the masker modulations, and hearing loss. It gave accurate predictions of speech reception thresholds measured in seven experiments. The proposed model can make predictions that neither of the two original models can: when the target speech is influenced by reverberation and the noise is modulated and/or the listeners have hearing loss. In terms of model parameters, four methods were compared for separating early and late reverberation, and two methods for accounting for hearing loss.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12120292/pdf/
Trends in Hearing, vol. 29. Pub Date: 2025-01-01; Epub Date: 2025-05-30; DOI: 10.1177/23312165251347131
Björn Herrmann
Language-agnostic, Automated Assessment of Listeners' Speech Recall Using Large Language Models
Abstract: Speech-comprehension difficulties are common among older people. Standard speech tests do not fully capture such difficulties because the tests poorly resemble the context-rich, story-like nature of ongoing conversation and are typically available only in a country's dominant/official language (e.g., English), leading to inaccurate scores for native speakers of other languages. Assessments of naturalistic, story-like speech in multiple languages require accurate, time-efficient scoring. The current research leverages modern large language models (LLMs), in native English speakers and native speakers of 10 other languages, to automate both the generation of high-quality spoken stories and the scoring of speech recall in different languages. Participants listened to and freely recalled short stories (in quiet/clear speech and in babble noise) in their native language. Scoring speech recall with LLM text-embeddings and LLM prompt engineering, combined with semantic-similarity analyses, revealed sensitivity to known effects of temporal order, primacy/recency, and background noise, as well as high similarity of recall scores across languages. The work overcomes limitations associated with simple speech materials and with testing restricted to closed native-speaker groups, because recall data of varying length and detail can be mapped across languages with high accuracy. The full automation of speech generation and recall scoring provides an important step toward comprehension assessments of naturalistic speech with clinical applicability.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12125525/pdf/