{"title":"Prevalence and Risk Factors of Frailty in Patients With Vestibular Hypofunction.","authors":"Tomohiko Kamo, Hirofumi Ogihara, Ryozo Tanaka, Takumi Kato, Masato Azami, Masao Noda, Reiko Tsunoda, Hiroaki Fushiki","doi":"10.1097/AUD.0000000000001697","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001697","url":null,"abstract":"<p><strong>Objective: </strong>The aim of this study was to investigate the prevalence of frailty and the factors associated with frailty in patients with vestibular hypofunction.</p><p><strong>Design: </strong>This observational study included 185 individuals with dizziness aged 40 and above who suffered from chronic vestibular hypofunction. We defined frailty using the diagnostic algorithm of the revised Japanese version of the Cardiovascular Health Study criteria. Frailty, prefrailty, and robust status were defined as 3 to 5, 1 to 2, and 0 points, respectively. For comparison, we also assessed the prevalence of frailty in community-dwelling adults over 40 years old (control group, n = 203).</p><p><strong>Results: </strong>The average ages for the groups with vestibular hypofunction and the control were 72.0 ± 10.1 and 69.8 ± 8.2 years, respectively. In the vestibular hypofunction group (185 patients), 32 were identified as frail (17.3%) and 103 as prefrail (55.7%). Of the patients with vestibular hypofunction aged 65 years or older (n = 151), 31 (20.5%) were frail and 80 (53.0%) were prefrail. In the control group, consisting of 203 community-dwelling adults, 15 were identified as frail (7.0%) and 108 as prefrail (54.0%). Among patients with vestibular hypofunction, 64 (34.6%) exhibited low gait speed, the most common of the frailty components.
Age, female sex, Hospital Anxiety and Depression Scale-Depression subscale score, and Dizziness Handicap Inventory score were associated with frailty and prefrailty in patients with vestibular hypofunction, after adjustment for confounding factors.</p><p><strong>Conclusions: </strong>The present study demonstrates that the prevalence of frailty in patients with vestibular hypofunction is higher than that in community-dwelling adults. Therefore, evaluating frailty in patients with vestibular hypofunction is crucial for identifying those at higher risk and implementing early interventions such as dietary guidance and exercises to strengthen the lower body along with vestibular rehabilitation.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144546259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
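The J-CHS cut-offs described in the Design section above reduce to a small scoring rule. A minimal sketch (the function name is ours, not the study's), with the reported group counts used to recover the prevalences:

```python
def frailty_category(points: int) -> str:
    """Classify per the revised Japanese CHS (J-CHS) cut-offs described above:
    0 points = robust, 1-2 = prefrail, 3-5 = frail."""
    if not 0 <= points <= 5:
        raise ValueError("J-CHS score must be between 0 and 5")
    if points == 0:
        return "robust"
    return "prefrail" if points <= 2 else "frail"

# Prevalences as reported for the vestibular hypofunction group (n = 185)
frail, prefrail, n = 32, 103, 185
print(f"frail: {frail / n:.1%}, prefrail: {prefrail / n:.1%}")
# → frail: 17.3%, prefrail: 55.7%
```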
Ear and Hearing. Pub Date: 2025-07-02. DOI: 10.1097/AUD.0000000000001692
Hidde Pielage, Bethany Plain, Sjors van de Ven, Gabrielle H Saunders, Niek J Versfeld, Sophia E Kramer, Adriana A Zekveld
{"title":"Using Pupillometry in Virtual Reality as a Tool for Speech-in-Noise Research.","authors":"Hidde Pielage, Bethany Plain, Sjors van de Ven, Gabrielle H Saunders, Niek J Versfeld, Sophia E Kramer, Adriana A Zekveld","doi":"10.1097/AUD.0000000000001692","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001692","url":null,"abstract":"<p><strong>Objectives: </strong>Virtual reality (VR) could be used in speech perception research to reduce the gap between the laboratory and real life. However, the suitability of using VR head-mounted displays (HMDs) warrants investigation, especially when pupillometric measurements are required. The present study aimed to assess if pupil measurements taken within an HMD would be sensitive to changes in listening effort related to a speech perception task. Task load of a VR speech-in-noise task was manipulated while pupil size was recorded within an HMD. The present study also assessed if VR could be used to simulate the copresence of other persons during listening, which is often an important aspect of real-life listening. To this end, participants completed the speech-in-noise task both in the copresence of virtual persons (agents) and while the virtual persons were replaced with visual distractors.</p><p><strong>Design: </strong>Thirty-three normal-hearing participants were provided with a VR-HMD and completed a speech-in-noise task in a virtual environment while their pupil size was measured. Participants were simultaneously presented with two sentences, one to each ear, which were masked by stationary noise that was 3 dB louder (-3 dB signal to noise ratio) than the sentences. Task load was manipulated by having participants attend to and repeat either one sentence or both sentences.
Participants did the task both while accompanied by two virtual agents who provided positive (head nodding) and negative (head shaking) feedback on some trials, and in the presence of two visual distractors that did not provide feedback (control condition). We assessed the effect of task load and copresence on performance, measures of pupil size (baseline pupil size and peak pupil dilation), and several subjective ratings. Participants also completed two questionnaires related to their experience of the virtual environment.</p><p><strong>Results: </strong>Task load significantly affected baseline pupil size, peak pupil dilation, and subjective ratings of effort, task difficulty, and performance. However, the manipulation of virtual copresence did not affect any of the outcome measures. The effect of task load on performance could not be assessed, as single-sentence conditions often resulted in a ceiling score (100% correct). An exploratory analysis provided some indication that trials following positive feedback from the agents (as compared to no feedback) showed increased baseline pupil sizes. Scores on the questionnaires indicated that participants were not highly immersed in the virtual environment, possibly explaining why they were largely unaffected by the virtual copresence manipulation.</p><p><strong>Conclusions: </strong>The finding that baseline pupil size and peak pupil dilation were sensitive to the manipulation of task load suggests that HMD pupillometry is sensitive to changes in arousal and effort.
This supports the idea that VR-HMDs can be successf","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144546260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
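The -3 dB signal to noise ratio above simply means the masker is presented 3 dB above the sentence level. A dependency-free sketch of scaling a noise signal to hit a target SNR (illustrative only, not the authors' stimulus pipeline):

```python
import math
import random

def rms(x):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise ratio equals `snr_db` dB, then add.
    At -3 dB SNR the noise ends up 3 dB louder than the speech."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * v for s, v in zip(speech, noise)]

random.seed(0)
speech = [random.gauss(0, 1) for _ in range(4800)]
noise = [random.gauss(0, 1) for _ in range(4800)]
mixed = mix_at_snr(speech, noise, -3.0)

# Verify the realized SNR by recovering the scaled noise from the mixture
scaled = [m - s for m, s in zip(mixed, speech)]
realized_snr_db = 20 * math.log10(rms(speech) / rms(scaled))  # ≈ -3.0
```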
Ear and Hearing. Pub Date: 2025-07-02. DOI: 10.1097/AUD.0000000000001695
Dumini de Silva, Piers Dawes, Mansoureh Nickbakht, Asaduzzaman Khan, John Newall
{"title":"Hearing Loss in Children From Culturally and Linguistically Diverse Communities in Australia.","authors":"Dumini de Silva, Piers Dawes, Mansoureh Nickbakht, Asaduzzaman Khan, John Newall","doi":"10.1097/AUD.0000000000001695","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001695","url":null,"abstract":"<p><strong>Objectives: </strong>Research from Europe and the USA suggests higher rates of hearing loss among children from diverse racial or ethnic backgrounds, but there is a lack of data in the Australian context. About one in four Australians has a diverse cultural and linguistic background, so there is a compelling need to investigate inequalities in hearing among Australian children from these communities and the factors that contribute to any inequalities. Objectives of this study were (1) to examine the prevalence of hearing loss in children from culturally and linguistically diverse versus majority backgrounds, and (2) to examine the demographic, socioeconomic, health, and migration-related factors associated with hearing loss in children from diverse cultural and linguistic communities.</p><p><strong>Design: </strong>A population-based cross-sectional dataset of 11- to 12-year-old children, collected in 2015 from the Child Health Checkpoint sub-set of the Longitudinal Study of Australian Children, was analyzed. Children from diverse cultural and linguistic communities were identified based on primary caregivers speaking a language other than English at home. A total of 145 children from diverse cultural and linguistic backgrounds and 1324 children from an ethnic majority background who completed pure-tone audiometry were included in the analysis. Logistic regression was used to estimate correlates of hearing loss.</p><p><strong>Results: </strong>A higher prevalence of any hearing loss (>15 dB HL in either ear) was found in children from diverse cultural and linguistic (38.3%) compared with ethnic majority (21.1%) communities.
Of the 49 children from culturally and linguistically diverse backgrounds with hearing loss, 58.0% had unilateral hearing loss. Most hearing loss (85.7%) was slight (16 to 25 dB HL). After adjusting for sociodemographic factors, family history of hearing loss, and presence of ear infections, children from diverse cultural and linguistic communities had 58% higher odds of hearing loss compared to their ethnic majority counterparts (odds ratio [OR], 1.58; 95% confidence interval [CI], 1.01-2.46). Lower primary caregiver self-reported English language proficiency (OR, 3.54; 95% CI, 1.58-7.92) was associated with higher odds of hearing loss, while longer duration of residence in Australia was associated with reduced odds of hearing loss (OR, 0.97; 95% CI, 0.94-0.99) among children from diverse cultural and linguistic backgrounds.</p><p><strong>Conclusions: </strong>Hearing loss was more common among children from culturally and linguistically diverse families compared with their ethnic majority peers. Future research should focus on identifying causal factors to inform hearing loss prevention strategies, and systematic screening for hearing loss targeting diverse cultural and linguistic communities to address hearing health inequalities.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144546258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
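The adjusted odds ratios reported above follow directly from logistic-regression coefficients. A small sketch of the arithmetic behind phrases like "58% higher odds" (illustrative only; function names are ours):

```python
import math

def odds_ratio(beta: float) -> float:
    """A logistic-regression coefficient beta maps to an odds ratio via exp(beta)."""
    return math.exp(beta)

def pct_change_in_odds(or_value: float) -> float:
    """Express an odds ratio as a percentage change in odds:
    OR 1.58 -> +58% odds; OR 0.97 per year of residence -> -3% odds per year."""
    return (or_value - 1.0) * 100.0

# Recovering the coefficient implied by the reported OR of 1.58
beta_diverse = math.log(1.58)
assert abs(odds_ratio(beta_diverse) - 1.58) < 1e-12
```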
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-02-10. DOI: 10.1097/AUD.0000000000001631
Mira Van Wilderode, Nathan Van Humbeeck, Ralf T Krampe, Astrid van Wieringen
{"title":"Enhancing Speech Perception in Noise Through Home-Based Competing Talker Training.","authors":"Mira Van Wilderode, Nathan Van Humbeeck, Ralf T Krampe, Astrid van Wieringen","doi":"10.1097/AUD.0000000000001631","DOIUrl":"10.1097/AUD.0000000000001631","url":null,"abstract":"<p><strong>Objectives: </strong>This study aimed to evaluate the effectiveness of a competing talker training paradigm (2TT-Flemish). The primary objectives were the assessment of on-task learning and the transfer to untrained tasks.</p><p><strong>Design: </strong>A total of 60 participants (54-84 years, mean age = 69.4) with speech-in-noise problems participated in the study. The study used a randomized controlled design with three groups: an immediate training group, a delayed training group, and an active control group. The immediate training group trained from the very beginning, while delayed training started after 4 weeks. The active control group listened to audiobooks for the first 4 weeks. All participants underwent 4 weeks of competing talker training. Outcome measures included speech perception in noise, analytical tasks (modulation detection and phoneme perception in noise), and inhibitory control. In addition, a listening-posture dual task assessed whether training freed up cognitive resources for a concurrently performed task. Finally, we assessed whether training induced self-reported benefits regarding hearing, listening effort, communication strategies, emotional consequences, knowledge, and acceptance of hearing loss. Outcome measures were assessed every 4 weeks over a 12-week period. The present study aimed to investigate the effectiveness of competing talker training in a stratified randomized controlled trial.</p><p><strong>Results: </strong>Overall compliance with the training was good and increased with age. We observed on-task improvements during the 4 weeks of training in all groups.
Results showed generalization toward speech-in-noise perception, persisting for at least 4 weeks after the end of training. No transfer toward more analytical tasks or inhibitory control was observed. Initial dual-task costs in postural control were reliably reduced after competing talker training suggesting a link between improved listening skills and cognitive resource allocation in multitask settings. Our results show that listeners report better knowledge about their hearing after training.</p><p><strong>Conclusions: </strong>After training with the 2TT-Flemish, results showed on-task improvements and generalization toward speech-in-noise. Improvements did not generalize toward basic analytical tasks. Results suggest that competing talker training enables listeners to free up cognitive resources, which can be used for another concurrent task.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"856-870"},"PeriodicalIF":2.6,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143384155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-02-19. DOI: 10.1097/AUD.0000000000001635
Cheng-Hung Hsin, Chia-Ying Lee, Yu Tsao
{"title":"Exploring N400 Predictability Effects During Sustained Speech Comprehension: From Listening-Related Fatigue to Speech Enhancement Evaluation.","authors":"Cheng-Hung Hsin, Chia-Ying Lee, Yu Tsao","doi":"10.1097/AUD.0000000000001635","DOIUrl":"10.1097/AUD.0000000000001635","url":null,"abstract":"<p><strong>Objectives: </strong>This study investigated the predictability effect on the N400 as an objective measure of listening-related fatigue during speech comprehension by: (1) examining how its characteristics (amplitude, latency, and topographic distribution) changed over time under clear versus noisy conditions to assess its utility as a marker for listening-related fatigue, and (2) evaluating whether these N400 parameters could assess the effectiveness of speech enhancement systems.</p><p><strong>Design: </strong>Two event-related potential experiments were conducted on 140 young adults (aged 20 to 30) assigned to four age-matched groups. Using a between-subjects design for listening conditions, participants comprehended spoken sentences ending in high- or low-predictability words while their brain activity was recorded using electroencephalography. Experiment 1 compared the predictability effect on the N400 in clear and noise-masked conditions, while experiment 2 examined this effect under two enhanced conditions (denoised using the Transformer- and minimum mean square error-based speech enhancement models). Electroencephalography data were divided into two blocks to analyze the changes in the predictability effect on the N400 over time, including amplitude, latency, and topographic distributions.</p><p><strong>Results: </strong>Experiment 1 compared N400 effects across blocks under different clarity conditions. Clear speech in block 2 elicited a more anteriorly distributed N400 effect without reduction or delay compared with block 1. Noisy speech in block 2 showed a reduced, delayed, and posteriorly distributed effect compared with block 1. 
Experiment 2 examined N400 effects during enhanced speech processing. Transformer-enhanced speech in block 1 demonstrated significantly increased N400 effect amplitude compared to noisy speech. However, both enhancement methods showed delayed N400 effects in block 2.</p><p><strong>Conclusions: </strong>This study suggests that temporal changes in the N400 predictability effect might serve as objective markers of sustained speech processing under different clarity conditions. During clear speech comprehension, listeners appear to maintain efficient semantic processing through additional resource recruitment over time, while noisy speech leads to reduced processing efficiency. When applied to enhanced speech, these N400 patterns reveal both the immediate benefits of speech enhancement for semantic processing and potential limitations in supporting sustained listening. These findings demonstrate the potential utility of the N400 predictability effect for understanding sustained listening demands and evaluating speech enhancement effectiveness.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"922-940"},"PeriodicalIF":2.6,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143451072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-02-21. DOI: 10.1097/AUD.0000000000001648
Bridget McNamara, Douglas S Brungart, Rebecca E Bieber, Ian Phillips, Alyssa J Davidson, Sandra Gordon-Salant
{"title":"Speech and Non-Speech Auditory Task Performance by Non-Native English Speakers.","authors":"Bridget McNamara, Douglas S Brungart, Rebecca E Bieber, Ian Phillips, Alyssa J Davidson, Sandra Gordon-Salant","doi":"10.1097/AUD.0000000000001648","DOIUrl":"10.1097/AUD.0000000000001648","url":null,"abstract":"<p><strong>Objectives: </strong>The goal of this study was to determine if performance on speech and non-speech clinical measures of auditory perception differs between two groups of adults: self-identified native speakers of English and non-native speakers of English who speak Spanish as a first language. The overall objective was to establish whether auditory perception tests developed for native English speakers are appropriate for bilingual Spanish-speaking adults who self-identify as non-native speakers of English. A secondary objective was to determine whether relative performance on English- and Spanish-language versions of a closed-set speech perception in noise task could accurately predict native-like performance on a battery of English language-dependent tests of auditory perception.</p><p><strong>Design: </strong>Participants were young, normal-hearing adults who self-identified as either native speakers of American English (n = 50) or as non-native speakers of American English (NNE; n = 25) who spoke Spanish as their first language. Participants completed a battery of perceptual tests, including speech tests (e.g., Quick Speech-in-Noise, time-compressed reverberant Quick Speech-in-Noise, etc.) and non-speech tests (Gaps in Noise, Frequency Pattern test, Duration Pattern test, Masking Level Difference). 
The English version of the Oldenburg Sentence test (OLSA) was administered to both groups; NNE participants also completed the Spanish version of the OLSA.</p><p><strong>Results: </strong>Analyses indicated that the group of native speakers of American English performed significantly better than the NNE group on all speech-based tests and on the two pattern recognition tests. There was no difference between groups on the remaining non-speech tests. For the NNE group, a difference of more than 2 SD on group-normalized scores for the English and Spanish OLSA accurately predicted poorer than normal performance on two or more tests of auditory perception with a language-dependent component either in the instructions or the stimuli.</p><p><strong>Conclusions: </strong>The results indicate that a number of English-based tests designed to assess auditory perception may be inappropriate for some Spanish-English bilingual adults. That is, some bilingual adults may perform worse than expected on tests that involve perceiving spoken English, in part because of linguistic differences, and not because of unusually poor auditory perception. The results also support the use of preliminary speech-in-noise screening tests in each of a bilingual patient's languages to establish if auditory perception tests in English are appropriate for a given individual.
If a non-native English speaker's screening performance is worse in English than in the native language, one suggested strategy is to select auditory perceptual tests that are impacted minimally or not at all by linguistic differences.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1056-1068"},"PeriodicalIF":2.6,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143470006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
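The 2-SD bilingual screening rule suggested by these results can be sketched as a z-score comparison. The direction convention below (higher z = better performance) and the function name are our assumptions, not the study's implementation:

```python
def exceeds_screening_criterion(z_english: float, z_native: float,
                                criterion_sd: float = 2.0) -> bool:
    """Given group-normalized speech-in-noise scores (z-scores; higher = better
    performance, by assumption), flag a listener whose English score falls more
    than `criterion_sd` SDs below their native-language score. Such a listener
    may warrant auditory tests that minimize language-dependent components."""
    return (z_native - z_english) > criterion_sd

# A listener scoring 2.5 SD worse in English than in Spanish would be flagged;
# a 1 SD gap would not.
flagged = exceeds_screening_criterion(z_english=-2.5, z_native=0.0)
not_flagged = exceeds_screening_criterion(z_english=-1.0, z_native=0.0)
```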
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-01-29. DOI: 10.1097/AUD.0000000000001638
Jonathan T Mo, Davis S Chong, Cynthia Sun, Nikita Mohapatra, Nicole T Jiam
{"title":"Machine-Learning Predictions of Cochlear Implant Functional Outcomes: A Systematic Review.","authors":"Jonathan T Mo, Davis S Chong, Cynthia Sun, Nikita Mohapatra, Nicole T Jiam","doi":"10.1097/AUD.0000000000001638","DOIUrl":"10.1097/AUD.0000000000001638","url":null,"abstract":"<p><strong>Objectives: </strong>Cochlear implant (CI) user functional outcomes are challenging to predict because of the variability in individual anatomy, neural health, CI device characteristics, and linguistic and listening experience. Machine learning (ML) techniques are uniquely poised for this predictive challenge because they can analyze nonlinear interactions using large amounts of multidimensional data. The objective of this article is to systematically review the literature regarding ML models that predict functional CI outcomes, defined as sound perception and production. We analyze the potential strengths and weaknesses of various ML models, identify important features for favorable outcomes, and suggest potential future directions of ML applications for CI-related clinical and research purposes.</p><p><strong>Design: </strong>We conducted a systematic literature search with Web of Science, Scopus, MEDLINE, EMBASE, CENTRAL, and CINAHL from the date of inception through September 2024. We included studies with ML models predicting a CI functional outcome, defined as those pertaining to sound perception and production, and excluded simulation studies and those involving patients without CIs. Using Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we extracted participant population, CI characteristics, ML model, and performance data. Sixteen studies examining 5058 pediatric and adult CI users (range: 4 to 2489) were included from an initial 1442 publications.</p><p><strong>Results: </strong>Studies predicted heterogeneous outcome measures pertaining to sound production (5 studies), sound perception (12 studies), and language (2 studies). 
ML models use a variety of prediction features, including demographic, audiological, imaging, and subjective measures. Some studies highlighted predictors beyond traditional CI audiometric outcomes, such as anatomical and imaging characteristics (e.g., vestibulocochlear nerve area, brain regions unaffected by auditory deprivation), health system factors (e.g., wait time to referral), and patient-reported measures (e.g., dizziness and tinnitus questionnaires). The ML models used were tree-based, kernel-based, instance-based, probabilistic, or neural-network models, with validation and test methods most commonly being k-fold cross-validation and train-test splits. Various statistical measures were used to evaluate model performance; however, for studies reporting accuracy, the accuracy of each study's best-performing model ranged from 71.0% to 98.83%.</p><p><strong>Conclusions: </strong>ML models demonstrate high predictive performance and illuminate factors that contribute to CI user functional outcomes. While many models showed favorable evaluation statistics, the majority were not adequately reported with regard to dataset characteristics, model creation, and validation. Furthermore, the extent of overfitting in these models is unclear and will likely result in poor generalization to new data.","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"952-962"},"PeriodicalIF":2.6,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143061483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
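k-fold cross-validation, the most common validation scheme among the reviewed models, partitions the data so that every observation is tested exactly once. A dependency-free sketch of generating the fold indices (illustrative; real studies would typically use a library implementation):

```python
import random

def k_fold_indices(n: int, k: int, seed: int = 0):
    """Yield (train, test) index lists for k-fold cross-validation:
    shuffle once, split into k roughly equal folds, and rotate the
    held-out fold so each sample appears in exactly one test set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(n=10, k=5))
```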
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-06-16. DOI: 10.1097/AUD.0000000000001640
Samantha Reina O'Connell, Susan R S Bissmeyer, Helena Gan, Raymond Lee Goldsworthy
{"title":"How Switching Musical Instruments Affects Pitch Discrimination for Cochlear Implant Users.","authors":"Samantha Reina O'Connell, Susan R S Bissmeyer, Helena Gan, Raymond Lee Goldsworthy","doi":"10.1097/AUD.0000000000001640","DOIUrl":"10.1097/AUD.0000000000001640","url":null,"abstract":"<p><strong>Objectives: </strong>Cochlear implant (CI) users struggle with music perception. Generally, they have poorer pitch discrimination and timbre identification than peers with normal hearing, which reduces their overall music appreciation and quality of life. This study's primary aim was to characterize how the increased difficulty of comparing pitch changes across musical instruments affects CI users and their peers with no known hearing loss. The motivation is to better understand the challenges that CI users face with polyphonic music listening. The primary hypothesis was that CI users would be more affected by instrument switching than those with no known hearing loss. The rationale was that poorer pitch and timbre perception through a CI hinders the disassociation between pitch and timbre changes needed for this demanding task.</p><p><strong>Design: </strong>Pitch discrimination was measured for piano and tenor saxophone including conditions with pitch comparisons across instruments. Adult participants included 15 CI users and 15 peers with no known hearing loss. Pitch discrimination was measured for 4 note ranges centered on A2 (110 Hz), A3 (220 Hz), A4 (440 Hz), and A5 (880 Hz). The effect of instrument switching was quantified as the change in discrimination thresholds with and without instrument switching. Analysis of variance and Spearman's rank correlation were used to test group differences and relational outcomes, respectively.</p><p><strong>Results: </strong>Although CI users had worse pitch discrimination, the additional difficulty of instrument switching did not significantly differ between groups. 
Discrimination thresholds in both groups were about two times worse with instrument switching than without. Further analyses, however, revealed that CI users were biased toward ranking tenor saxophone higher in pitch compared with piano, whereas those with no known hearing loss were not so biased. In addition, CI users were significantly more affected by instrument switching for the A5 note range.</p><p><strong>Conclusions: </strong>The magnitude of the effect of instrument switching on pitch resolution was similar for CI users and their peers with no known hearing loss. However, CI users were biased toward ranking tenor saxophone as higher in pitch and were significantly more affected by instrument switching for pitches near A5. These findings might reflect poorer temporal coding of fundamental frequency by CIs.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"997-1008"},"PeriodicalIF":2.6,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144033599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
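The note ranges above (A2 through A5) are exact octaves of one another, and pitch-discrimination thresholds are conventionally expressed in semitones. The conversion is a short calculation:

```python
import math

# Note ranges used in the study (A2, A3, A4, A5 are successive octaves)
A_NOTES_HZ = {"A2": 110.0, "A3": 220.0, "A4": 440.0, "A5": 880.0}

def semitones(f_ref_hz: float, f_hz: float) -> float:
    """Express the ratio between two frequencies in semitones:
    12 * log2(f / f_ref). One octave (a doubling) is 12 semitones."""
    return 12.0 * math.log2(f_hz / f_ref_hz)

octave = semitones(A_NOTES_HZ["A4"], A_NOTES_HZ["A5"])  # 12.0
```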
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-01-24. DOI: 10.1097/AUD.0000000000001634
Sarah Meehan, Marc P van der Schroeff, Marloes L Adank, Wichor M Bramer, Jantien L Vroegop
{"title":"The Performance of the Acoustic Change Complex Versus Psychophysical Behavioral Measures: A Systematic Review of Measurements in Adults.","authors":"Sarah Meehan, Marc P van der Schroeff, Marloes L Adank, Wichor M Bramer, Jantien L Vroegop","doi":"10.1097/AUD.0000000000001634","DOIUrl":"10.1097/AUD.0000000000001634","url":null,"abstract":"<p><strong>Objectives: </strong>The acoustic change complex (ACC) is a cortical auditory evoked potential that shows promise as an objective test of the neural capacity for speech and sound discrimination, particularly for difficult-to-test populations, for example, cognitively impaired adults. There is uncertainty, however, surrounding the performance of the ACC with behavioral measures. The objective of this study was to systematically review the literature, focusing on adult studies, to investigate the relationship between ACC responses and behavioral psychophysical measures.</p><p><strong>Design: </strong>Original peer-reviewed articles conducting performance comparisons between ACCs and behavioral measures in adults were identified through systematic searches. The review was conducted using Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines for reporting, and the methodological quality of the included articles was assessed.</p><p><strong>Results: </strong>A total of 66 studies were retrieved that conducted adult ACC measurements, of which 27 studies included performance comparisons. 
Meta-analysis revealed a total of 41 significant correlations between ACC responses (amplitudes, latencies, and thresholds) and behavioral measures of speech perception (2 weak, 28 moderate, and 11 strong correlations), and 12 significant moderate/strong correlations were identified with behavioral measures of frequency discrimination.</p><p><strong>Conclusions: </strong>This systematic review finds that ACC responses are associated with speech perception and frequency discrimination, in addition to other types of sound discrimination. The choice of evoking stimuli, ACC outcome measure, and behavioral measure used may influence the strength and visibility of potential correlations between the objective (ACC) and behavioral measures. The performance of the ACC technique highlighted in this review suggests that this tool may serve as an alternative measure of auditory discrimination when corresponding behavioral measures prove challenging or unfeasible.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"839-850"},"PeriodicalIF":2.6,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143029444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-02-11. DOI: 10.1097/AUD.0000000000001641
Elad Sagi, Mario A Svirsky
{"title":"A Level-Adjusted Cochlear Frequency-to-Place Map for Estimating Tonotopic Frequency Mismatch With a Cochlear Implant.","authors":"Elad Sagi, Mario A Svirsky","doi":"10.1097/AUD.0000000000001641","DOIUrl":"10.1097/AUD.0000000000001641","url":null,"abstract":"<p><strong>Objectives: </strong>To provide a level-adjusted correction to the current standard relating anatomical cochlear place to characteristic frequency (CF) in humans, and to re-evaluate anatomical frequency mismatch in cochlear implant (CI recipients considering this correction. It is proposed that a level-adjusted place-frequency function may represent a more relevant tonotopic benchmark for CIs in comparison to the current standard.</p><p><strong>Design: </strong>The present analytical study compiled data from 15 previous animal studies that reported isointensity responses from cochlear structures at different stimulation levels. Extracted outcome measures were CFs and centroid-based best frequencies at 70 dB SPL input from 47 specimens spanning a broad range of cochlear locations. A simple relationship was used to transform these measures to human estimates of characteristic and best frequencies, and nonlinear regression was applied to these estimates to determine how the standard human place-frequency function should be adjusted to reflect best frequency rather than CF. The proposed level-adjusted correction was then compared with average place-frequency positions of commonly used CI devices when programmed with clinical settings.</p><p><strong>Results: </strong>The present study showed that the best frequency at 70 dB SPL (BF70) tends to shift away from CF. The amount of shift was statistically significant (signed-rank test z = 5.143, p < 0.001), but the amount and direction of shift depended on cochlear location. At cochlear locations up to 600° from the base, BF70 shifted downward in frequency relative to CF by about 4 semitones on average. 
Beyond 600° from the base, BF70 shifted upward in frequency relative to CF by about 6 semitones on average. In terms of spread (90% prediction interval), the amount of shift between CF and BF70 varied from virtually no shift to nearly an octave of shift. With the new level-adjusted place-frequency function, the amount of anatomical frequency mismatch for devices programmed with standard-of-care settings is less extreme than originally thought and may be nonexistent for all but the most apical electrodes.</p><p><strong>Conclusions: </strong>The present study validates the current standard for relating cochlear place to CF, and introduces a level-adjusted correction for how best frequency shifts away from CF at moderately loud stimulation levels. This correction may represent a more relevant tonotopic reference for CIs. To the extent that it does, its implementation may potentially enhance perceptual accommodation and speech understanding in CI users, thereby improving CI outcomes and contributing to advancements in the programming and clinical management of CIs.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"963-975"},"PeriodicalIF":2.6,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12170178/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143392578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
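The "current standard" place-to-frequency map this abstract adjusts is conventionally modeled with Greenwood's function. A minimal sketch, assuming the standard human parameters (A = 165.4, a = 2.1, k = 0.88, with x the fractional distance from the apex) and applying the roughly 4-semitone downward shift reported above for basal-to-mid locations; this is an illustration of the concepts, not the study's fitted level-adjusted function:

```python
def greenwood_cf(x: float) -> float:
    """Greenwood place-to-characteristic-frequency map for humans.

    x: fractional distance along the basilar membrane from the apex
       (0.0 = apex, 1.0 = base). Returns CF in Hz.
    Standard human parameters A=165.4, a=2.1, k=0.88 are assumed.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

def shift_semitones(freq_hz: float, semitones: float) -> float:
    """Shift a frequency by a signed number of semitones (12 per octave)."""
    return freq_hz * 2 ** (semitones / 12)

# Example: CF at a fairly basal location, then an illustrative ~4-semitone
# downward shift toward a 70 dB SPL best frequency (BF70), as reported above.
cf_basal = greenwood_cf(0.8)
bf70_estimate = shift_semitones(cf_basal, -4)
```

The semitone helper makes the reported spread easy to reason about: a 12-semitone shift halves or doubles the frequency, so "nearly an octave" of CF-to-BF70 shift corresponds to close to a factor of two.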