Ear and Hearing · Pub Date: 2025-05-23 · DOI: 10.1097/AUD.0000000000001684
Alejandra Ullauri
{"title":"Providing Hearing Care for Patients With Non-English Language Preference: Interpreters Types and Delivery Modalities.","authors":"Alejandra Ullauri","doi":"10.1097/AUD.0000000000001684","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001684","url":null,"abstract":"<p><p>In the United States, about 67 million people speak a language other than English at home, and of those, 38% report speaking English less than \"very well.\" Not speaking the majority's language has been recognized as one of the major barriers linguistic minorities encounter when navigating healthcare systems. The evidence suggests that the type of interpreter and delivery modality can increase the accuracy of the interpretation and improve the overall patient experience for non-English language preference patients. As clinical settings leverage technology to increase language access and improve utilization of hearing services, it is crucial that they consider what would be more appropriate for patients with hearing loss who also prefer to communicate in languages other than English. The objective of this perspective article is to draw attention to recent findings regarding interpreters and delivery modalities and discuss the impact of such findings in hearing care services.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144129520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ear and Hearing · Pub Date: 2025-05-23 · DOI: 10.1097/AUD.0000000000001677
Srikanta K Mishra, Sajana Aryal, Chhayakanta Patro, Qian-Jie Fu
{"title":"Extended High-Frequency Hearing Loss and Suprathreshold Auditory Processing: The Moderating Role of Auditory Working Memory.","authors":"Srikanta K Mishra, Sajana Aryal, Chhayakanta Patro, Qian-Jie Fu","doi":"10.1097/AUD.0000000000001677","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001677","url":null,"abstract":"<p><strong>Objectives: </strong>Natural sounds, including speech, contain temporal fluctuations, and hearing loss influences the coding of these temporal features. However, how subclinical hearing loss may influence temporal variations remains unclear. In listeners with normal audiograms, hearing loss above 8 kHz is indicative of basal cochlear damage and may signal the onset of cochlear dysfunction. This study examined a conceptual framework to investigate the relationship between extended high-frequency hearing and suprathreshold auditory processing, particularly focusing on how cognitive factors, such as working memory, moderate these interactions.</p><p><strong>Design: </strong>Frequency modulation difference limens to slow (2 Hz) and fast (20 Hz) modulations, backward masking thresholds, and digit span measures were obtained in 44 normal-hearing listeners with varying degrees of extended high-frequency hearing thresholds.</p><p><strong>Results: </strong>Extended high-frequency hearing thresholds alone were not directly associated with frequency modulation difference limens or backward masking thresholds. However, working memory capacity-particularly as measured by the backward digit span-moderated the relationship between extended high-frequency thresholds and suprathreshold auditory performance. Among individuals with lower working memory capacity, elevated extended high-frequency thresholds were associated with reduced sensitivity to fast-rate frequency modulations and higher backward masking thresholds. It is important to note that this moderating effect was task-specific, as it was not observed for slow-rate modulations.</p><p><strong>Conclusions: </strong>The impact of elevated extended high-frequency thresholds on suprathreshold auditory processing is influenced by working memory capacity. Individuals with reduced cognitive capacity are particularly vulnerable to the perceptual effects of subclinical cochlear damage. This suggests that cognitive resources act as a compensatory mechanism, helping to mitigate the effects of subclinical deficits, especially in tasks that are temporally challenging.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144129573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ear and Hearing · Pub Date: 2025-05-22 · DOI: 10.1097/AUD.0000000000001673
Charlotte Garcia, Robert P Carlyon
{"title":"Assessing Array-Type Differences in Cochlear Implant Users Using the Panoramic ECAP Method.","authors":"Charlotte Garcia, Robert P Carlyon","doi":"10.1097/AUD.0000000000001673","DOIUrl":"10.1097/AUD.0000000000001673","url":null,"abstract":"<p><strong>Objectives: </strong>Cochlear implant companies manufacture devices with different electrode array types. Some arrays have a straight geometry designed for minimal neuronal trauma, while others are precurved and designed to position the electrodes closer to the cochlear neurons. Due to their differing geometries, it is possible that the arrays are not only positioned differently inside the cochlea but also produce different patterns of the spread of current and of neural excitation. The panoramic electrically evoked compound action potential method (PECAP) provides detailed estimates of peripheral neural responsiveness and current spread for individual patients along the length of the cochlea. These estimates were assessed as a function of electrode position and array type, providing a normative dataset useful for identifying unusual patterns in individual patients.</p><p><strong>Design: </strong>ECAPs were collected from cochlear implant users using the forward-masking artifact-reduction technique for every combination of masker and probe electrode at the most comfortable level. Data were available for 91 ears using Cochlear devices, and 53 ears using Advanced Bionics devices. The Cochlear users had straight arrays (Slim Straight, CI-22 series, n = 35), or 1 of 2 precurved arrays (Contour Advance, CI-12 series, n = 43, or Slim Modiolar, CI-32 series, n = 13). Computed tomography scans were also available for 41 of them, and electrode-modiolus distances were calculated. The Advanced Bionics users had 1 of 2 straight arrays (1J, n = 9 or SlimJ, n = 20), or precurved arrays (Helix, n = 4 or Mid-Scala, n = 20). The ECAPs were submitted to the PECAP algorithm to estimate current spread and neural responsiveness along the length of the electrode array for each user. A linear mixed-effects model was used to determine whether there were statistically significant differences between different array types and/or for different electrodes, both for the PECAP estimates of current spread and neural responsiveness, as well as for the available electrode-modiolus distances. Correlations were also conducted between PECAP's estimate of current spread and the electrode-modiolus distances.</p><p><strong>Results: </strong>For Cochlear users, significant effects of array type (p = 0.001) and of electrode (p < 0.001) were found on the PECAP's current-spread estimate, as well as a significant interaction (p = 0.006). Slim Straight arrays had a wider overall current spread than both the precurved arrays (Contour Advance and Slim Modiolar). The interaction revealed the strongest effect at the apex. A significant correlation between PECAP's current-spread estimate and the electrode-modiolus distances was also found across subjects (r = 0.516, p < 0.001). 
No effect of array type was found on PECAP's estimate of current spread for the Advanced Bionics users (p = 0.979).</p><p><strong>Conclusions: </strong>These results suggest that for users of the Cochlear devic","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7617747/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144120705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
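The Design names a linear mixed-effects model with array type and electrode as predictors of the per-electrode PECAP estimates. A hedged sketch of that kind of model is shown below, with a random intercept per subject; the DataFrame columns (current_spread, array_type, electrode, subject) are assumed for illustration, since the authors' exact model specification is not given in the abstract.

```python
# Minimal sketch: linear mixed-effects model of PECAP current-spread estimates
# with array type and electrode as fixed effects and subject as a random effect.
# Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pecap_estimates.csv")  # one row per subject x electrode

model = smf.mixedlm(
    "current_spread ~ C(array_type) * electrode",  # fixed effects + interaction
    data=df,
    groups=df["subject"],                          # random intercept per subject
).fit()
print(model.summary())
```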
Ear and Hearing · Pub Date: 2025-05-21 · DOI: 10.1097/AUD.0000000000001682
Antonios Chalimourdas, Dominique Hansen, Kenneth Verboven, Sarah Michiels
{"title":"\"The Relationship Between Physical Activity and Tinnitus Loudness and Severity: A Cross-Sectional Study\".","authors":"Antonios Chalimourdas, Dominique Hansen, Kenneth Verboven, Sarah Michiels","doi":"10.1097/AUD.0000000000001682","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001682","url":null,"abstract":"<p><strong>Introduction: </strong>Tinnitus is characterized by the perception of sound in the absence of an external stimulus and affects about 14.4% of the adult population. Psychological co-morbidities such as stress, anxiety, and depression can largely influence the patient's perception of tinnitus loudness and severity. Research has shown that these psychological conditions improve when patients are more physically active. To date, however, it is unclear if physical activity also affects tinnitus loudness and severity. Therefore, this study aims to uncover the relationship between physical activity and tinnitus loudness and severity in patients with tinnitus.</p><p><strong>Methods: </strong>In this cross-sectional study, 2751 adult patients (55.5% male, mean age: 52.3 ± 14.6 years) with tinnitus were included. All participants completed the comprehensive version of the International Physical Activity Questionnaire via an online survey. Tinnitus loudness and severity were assessed using self-reported Likert scales. Potential connections between different aspects of physical activity and tinnitus loudness and severity were explored using adjusted logistic regression models, and odds ratios (ORs) were calculated.</p><p><strong>Results: </strong>Patients who engage more in moderate (OR = 0.962) or vigorous-intensity activities (OR = 0.884) during leisure time showed significantly lower scores for tinnitus loudness. Furthermore, patients who engage more in vigorous-intensity activities during leisure time showed significantly lower scores for tinnitus severity (OR = 0.890).</p><p><strong>Conclusions: </strong>This study indicates that physical activity intensity during leisure time may attenuate tinnitus loudness and severity. Future prospective studies are needed to investigate the potential causal role of optimizing physical activity patterns to reduce tinnitus loudness and severity.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144113108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ear and Hearing · Pub Date: 2025-05-20 · DOI: 10.1097/AUD.0000000000001667
Julia R Drouin, Laura N Putnam, Charles P Davis
{"title":"Malleability of the Lexical Bias Effect for Acoustically Degraded Speech.","authors":"Julia R Drouin, Laura N Putnam, Charles P Davis","doi":"10.1097/AUD.0000000000001667","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001667","url":null,"abstract":"<p><strong>Objectives: </strong>Lexical bias is a phenomenon wherein impoverished speech signals tend to be perceived in line with the word context in which they are heard. Previous research demonstrated that lexical bias may guide processing when the acoustic signal is degraded, as in the case of cochlear implant (CI) users. The goal of the present study was twofold: (1) replicate previous lab-based work demonstrating a lexical bias for acoustically degraded speech using online research methods, and (2) characterize the malleability of the lexical bias effect following a period of auditory training. We hypothesized that structured experience via auditory training would minimize reliance on lexical context during phonetic categorization for degraded speech, resulting in a reduced lexical bias.</p><p><strong>Design: </strong>In experiment 1, CI users and normal hearing (NH) listeners categorized along 2 /b/-/g/ continua (BAP-GAP; BACK-GACK). NH listeners heard each continuum in a clear and eight-channel noise-vocoded format, while CI users categorized for clear speech. In experiment 2, a separate group of NH listeners completed a same/different auditory discrimination training task with feedback and then completed phonetic categorization for eight-channel noise-vocoded /b/-/g/ continua.</p><p><strong>Results: </strong>In experiment 1, we observed a lexical bias effect in both CI users and NH listeners such that listeners more consistently categorized speech continua in line with the lexical context. In NH listeners, an enhanced lexical bias effect was observed for the eight-channel noise-vocoded speech condition, while both CI users and the clear speech condition showed a relatively weaker lexical bias. In experiment 2, structured training altered phonetic categorization and reliance on lexical context. Namely, the magnitude of the lexical bias effect decreased following a short period of auditory training relative to untrained listeners.</p><p><strong>Conclusions: </strong>Findings from experiment 1 replicate and extend previous work, suggesting that web-based methods may provide alternative routes for testing phonetic categorization in NH and hearing-impaired listeners. Moreover, findings from experiment 2 suggest that lexical bias is not a static phenomenon; rather, experience via auditory training can dynamically alter reliance on lexical context for speech categorization. These findings extend theoretical models of speech processing in terms of how top-down information is weighted for listeners adapting to acoustically degraded speech. 
Finally, these findings hold clinical implications for tracking changes in phonetic categorization and reliance on lexical context throughout the CI adaptation process.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144102900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
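The eight-channel noise-vocoded stimuli described in the Design follow the standard channel-vocoding recipe: band-pass filter the signal into analysis bands, extract each band's temporal envelope, and use it to modulate band-limited noise. The sketch below implements that generic recipe with SciPy under assumed corner frequencies, filter orders, and file names; it is not the authors' stimulus-generation code.

```python
# Minimal sketch of an 8-channel noise vocoder (generic recipe, not the
# authors' implementation). Assumes a mono input WAV and logarithmically
# spaced analysis bands between 100 and 8000 Hz.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def band_edges(n_channels, lo=100.0, hi=8000.0):
    # Log-spaced corner frequencies (an assumption; Greenwood spacing is also common).
    return np.geomspace(lo, hi, n_channels + 1)

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def envelope(x, fs, cutoff=50.0, order=2):
    # Half-wave rectify, then low-pass to keep the slow amplitude fluctuations.
    sos = butter(order, cutoff, btype="lowpass", fs=fs, output="sos")
    return np.maximum(sosfiltfilt(sos, np.maximum(x, 0.0)), 0.0)

def noise_vocode(x, fs, n_channels=8):
    rng = np.random.default_rng(0)
    edges = band_edges(n_channels)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(x, lo, hi, fs)
        env = envelope(band, fs)
        carrier = bandpass(rng.standard_normal(len(x)), lo, hi, fs)
        out += env * carrier
    # Match the overall RMS of the original signal.
    return out * (np.sqrt(np.mean(x**2)) / np.sqrt(np.mean(out**2)))

fs, speech = wavfile.read("back_gack_step1.wav")  # hypothetical continuum token
speech = speech.astype(np.float64)
vocoded = noise_vocode(speech, fs)
wavfile.write("back_gack_step1_voc8.wav", fs,
              (vocoded / np.max(np.abs(vocoded)) * 32767).astype(np.int16))
```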
{"title":"Inner Ear Pathologies After Cochlear Implantation in Guinea Pigs: Functional, Histopathological, and Endoplasmic Reticulum Stress-Mediated Apoptosis.","authors":"Yuzhong Zhang, Qiong Wu, Shuyun Liu, Yu Zhao, Qingqing Dai, Yulian Jin, Qing Zhang","doi":"10.1097/AUD.0000000000001668","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001668","url":null,"abstract":"<p><strong>Objectives: </strong>Vestibular dysfunction is one of the most common complications of cochlear implantation (CI); however, the pathological changes and mechanisms underlying inner ear damage post-CI remain poorly understood. This study aimed to investigate the functional and histopathological changes in the cochlea and vestibule as well as endoplasmic reticulum (ER) stress-mediated apoptosis in guinea pigs after CI.</p><p><strong>Design: </strong>Auditory brainstem response, ice water test, and vestibular evoked myogenic potentials were used to assess cochlear and vestibular function in guinea pigs before and after CI. Histopathological analyses were conducted at various time points post-CI to observe morphological changes in the cochlea and vestibule, as well as the impact of ER stress on these tissues.</p><p><strong>Results: </strong>After CI, 10.7% (9/84) of the guinea pigs exhibited nystagmus and balance dysfunction. Auditory brainstem response thresholds increased significantly after CI, and air-conducted cervical and ocular vestibular evoked myogenic potential response rates decreased. The ice water test revealed a gradual reduction in nystagmus elicitation rates, along with decreased nystagmus frequency, prolonged latency, and shortened duration. Histopathological analysis of the cochlea revealed fibrous and osseous tissue formation in the scala tympani and a reduction in hair cells and spiral ganglion cells. In the vestibule, alterations included flattening the ampullary crista and disorganized sensory epithelial cells. Transmission electron microscopy revealed pathological changes including cytoplasmic vacuolization and chromatin uniformity in both cochlear and vestibular hair cells. ER stress was prominent in the cochlea, while no substantial stress response was observed in the vestibule.</p><p><strong>Conclusions: </strong>Our study highlights the various effects of CI surgery on cochlear and vestibular function and morphology in guinea pigs. ER stress-mediated apoptosis may contribute to secondary cochlear damage, whereas the vestibular system demonstrates adaptive responses that preserve cellular homeostasis. These findings provide insights into potential mechanisms underlying inner ear complications post-CI.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143992970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of Adjusted Exposure Levels Based on Different Kurtosis Adjustment Algorithms and Their Performance Comparison in Evaluating Noise-Induced Hearing Loss.","authors":"Hengjiang Liu, Meibian Zhang, Xin Sun, Weijiang Hu, Hua Zou, Jingsong Li, Wei Qiu","doi":"10.1097/AUD.0000000000001674","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001674","url":null,"abstract":"<p><strong>Objectives: </strong>Kurtosis is an essential metric in evaluating hearing loss caused by complex noise, which is calculated from the fourth central moment and the SD of the noise signal. Previous studies have shown that kurtosis-adjusted noise exposure levels can more accurately predict hearing loss caused by various types of noise. There are three potential kurtosis adjustment schemes: arithmetic averaging, geometric averaging, and segmented adjustment. This study evaluates which kurtosis adjustment scheme is most practical based on the data collected from industrial settings.</p><p><strong>Design: </strong>This study analyzed individual daily noise recordings collected from 4276 workers in manufacturing industries in China. Using 60 sec as the calculation window length, each window's noise kurtosis was calculated without overlap. Then, the arithmetic averaging (Scheme 1) and geometric averaging (Scheme 2) algorithms were used to calculate the kurtosis of the shift-long noise. Eventually, the kurtosis-adjusted 8 h working day exposure level (LAeq,8hr) was obtained using the kurtosis-adjusted formula. In Scheme 3 (i.e., segmented adjustment algorithm), kurtosis was determined per 60 sec simultaneously with A-weighted sound pressure level (LAeq,60sec). Kurtosis adjustment was applied on LAeq,60sec every 60 sec. Then, the kurtosis-adjusted LAeq,8hr was calculated by log-averaging of 480 one-minute-adjusted LAeq,60sec values. The cohort was divided into three groups according to the level of kurtosis. Which group the participants belonged to depended on the method used to calculate the shift-long noise kurtosis (i.e., arithmetic or geometric averaging). Noise-induced hearing loss was defined as noise-induced permanent threshold shift at frequencies 3, 4, and 6 kHz (NIPTS346). Predicted NIPTS346 was calculated using the ISO 1999 model or Lempert's model for each participant, and the actual NIPTS346 was determined by correcting for age and sex using non-noise-exposed Chinese workers (n = 1297). A dose-effect relationship for three kurtosis groups was established using the NIPTS346 and kurtosis-adjusted LAeq,8hr. The performance of three kurtosis adjustment algorithms was evaluated by comparing the estimated marginal mean of the difference between estimated NIPTS346 by ISO 1999 or estimated NIPTS346 by Lempert's model and actual NIPTS346 in three kurtosis groups.</p><p><strong>Results: </strong>Multiple linear regression was used to analyze the noise kurtosis classified data obtained by arithmetic and geometric averaging, and the calculated adjustment coefficients were 6.5 and 7.6, respectively. Multilayer perceptron regression was used to identify the optimal coefficients in the segmented adjustment, resulting in a coefficient value of 5.4. These three adjustment schemes were used to evaluate the performance of NIPTS346 prediction using Lempert's model. 
The kurtosis adjustment based on the geometric averaging algorithm (Scheme 2) and on th","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144022937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
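The per-window kurtosis computation and the three averaging schemes lend themselves to a short numerical sketch. The version below assumes the commonly used adjustment form LAeq,adj = LAeq + λ·log10(β/3), with β the kurtosis, 3 the Gaussian reference, and λ the adjustment coefficient (6.5, 7.6, or 5.4 in the abstract); the authors' exact formula and calibration details are not given here, so treat this purely as an illustration.

```python
# Sketch of per-window kurtosis and the three kurtosis-adjustment schemes for a
# shift-long recording. The adjustment form LAeq_adj = LAeq + lam*log10(beta/3)
# is a commonly cited one and is assumed here, not quoted from the paper.
import numpy as np

FS = 48_000          # sample rate of the noise recording (assumed)
WIN = 60 * FS        # 60-sec windows, no overlap
BETA_G = 3.0         # kurtosis of a Gaussian signal

def window_kurtosis(x):
    x = x - np.mean(x)
    return np.mean(x**4) / np.std(x)**4          # fourth central moment / SD^4

def laeq(x, p_ref=20e-6):
    # Equivalent continuous level; A-weighting is assumed to be applied upstream.
    return 10 * np.log10(np.mean((x / p_ref) ** 2))

def log_average(levels_db):
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

def adjusted_levels(pressure, lam_arith=6.5, lam_geo=7.6, lam_seg=5.4):
    wins = [pressure[i:i + WIN] for i in range(0, len(pressure) - WIN + 1, WIN)]
    betas = np.array([window_kurtosis(w) for w in wins])
    betas = np.maximum(betas, BETA_G)   # assume no downward adjustment for sub-Gaussian windows
    levels = np.array([laeq(w) for w in wins])
    laeq_8h = log_average(levels)

    beta_arith = betas.mean()                    # Scheme 1: arithmetic mean kurtosis
    beta_geo = np.exp(np.log(betas).mean())      # Scheme 2: geometric mean kurtosis
    scheme1 = laeq_8h + lam_arith * np.log10(beta_arith / BETA_G)
    scheme2 = laeq_8h + lam_geo * np.log10(beta_geo / BETA_G)

    # Scheme 3: adjust each one-minute level, then log-average the 480 values.
    per_min = levels + lam_seg * np.log10(betas / BETA_G)
    scheme3 = log_average(per_min)
    return scheme1, scheme2, scheme3
```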
Ear and Hearing · Pub Date: 2025-05-06 · DOI: 10.1097/AUD.0000000000001670
Zahra Jafari, Ryan E Harari, Glenn Hole, Bryan E Kolb, Majid H Mohajerani
{"title":"Machine Learning Models Can Predict Tinnitus and Noise-Induced Hearing Loss.","authors":"Zahra Jafari, Ryan E Harari, Glenn Hole, Bryan E Kolb, Majid H Mohajerani","doi":"10.1097/AUD.0000000000001670","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001670","url":null,"abstract":"<p><strong>Objectives: </strong>Despite the extensive use of machine learning (ML) models in health sciences for outcome prediction and condition classification, their application in differentiating various types of auditory disorders remains limited. This study aimed to address this gap by evaluating the efficacy of five ML models in distinguishing (a) individuals with tinnitus from those without tinnitus and (b) noise-induced hearing loss (NIHL) from age-related hearing loss (ARHL).</p><p><strong>Design: </strong>We used data from a cross-sectional study of the Canadian population, which included audiologic and demographic information from 928 adults aged 30 to 100 years, diagnosed with either ARHL or NIHL due to long-term occupational noise exposure. The ML models applied in this study were artificial neural networks (ANNs), K-nearest neighbors, logistic regression, random forest (RF), and support vector machines.</p><p><strong>Results: </strong>The study revealed that tinnitus prevalence was over twice as high in the NIHL group compared with the ARHL group, with a frequency of 27.85% versus 8.85% in constant tinnitus and 18.55% versus 10.86% in intermittent tinnitus. In pattern recognition, significantly greater hearing loss was found at medium- and high-band frequencies in NIHL versus ARHL. In both NIHL and ARHL, individuals with tinnitus showed better pure-tone sensitivity than those without tinnitus. Among the ML models, ANN achieved the highest overall accuracy (70%), precision (60%), and F1-score (87%) for predicting tinnitus, with an area under the curve of 0.71. RF outperformed other models in differentiating NIHL from ARHL, with the highest precision (79% for NIHL, 85% for ARHL), recall (85% for NIHL), F1-score (81% for NIHL), and area under the curve (0.90).</p><p><strong>Conclusions: </strong>Our findings highlight the application of ML models, particularly ANN and RF, in advancing diagnostic precision for tinnitus and NIHL, potentially providing a framework for integrating ML techniques into clinical audiology for improved diagnostic precision. Future research is suggested to expand datasets to include diverse populations and integrate longitudinal data.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144027745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ear and Hearing · Pub Date: 2025-05-01 · Epub Date: 2025-01-20 · DOI: 10.1097/AUD.0000000000001623
Jihoo Kim, Kang Hyeon Lim, Euijin Kim, Seunghu Kim, Hong Jin Kim, Ye Hwan Lee, Sungkean Kim, June Choi
{"title":"Machine Learning-Based Diagnosis of Chronic Subjective Tinnitus With Altered Cognitive Function: An Event-Related Potential Study.","authors":"Jihoo Kim, Kang Hyeon Lim, Euijin Kim, Seunghu Kim, Hong Jin Kim, Ye Hwan Lee, Sungkean Kim, June Choi","doi":"10.1097/AUD.0000000000001623","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001623","url":null,"abstract":"<p><strong>Objectives: </strong>Due to the absence of objective diagnostic criteria, tinnitus diagnosis primarily relies on subjective assessments. However, its neuropathological features can be objectively quantified using electroencephalography (EEG). Despite the existing research, the pathophysiology of tinnitus remains unclear. The objective of this study was to gain a deeper comprehension of the neural mechanisms underlying tinnitus through the comparison of cognitive event-related potentials in patients with tinnitus and healthy controls (HCs). Furthermore, we explored the potential of EEG-derived features as biomarkers for tinnitus using machine learning techniques.</p><p><strong>Design: </strong>Forty-eight participants (24 patients with tinnitus and 24 HCs) underwent comprehensive audiological assessments and EEG recordings. We extracted N2 and P3 components of the midline electrodes using an auditory oddball paradigm, to explore the relationship between tinnitus and cognitive function. In addition, the current source density for N2- and P3-related regions of interest was computed. A linear support vector machine classifier was used to distinguish patients with tinnitus from HCs.</p><p><strong>Results: </strong>The P3 peak amplitudes were significantly diminished in patients with tinnitus at the AFz, Fz, Cz, and Pz electrodes, whereas the N2 peak latencies were significantly delayed at Cz electrode. Source analysis revealed notably reduced N2 activities in bilateral fusiform gyrus, bilateral cuneus, bilateral temporal gyrus, and bilateral insula of patients with tinnitus. Correlation analysis revealed significant associations between the Hospital Anxiety and Depression Scale-Depression scores and N2 source activities at left insula, right insula, and left inferior temporal gyrus. The best classification performance showed a validation accuracy of 85.42%, validation sensitivity of 87.50%, and validation specificity of 83.33% in distinguishing between patients with tinnitus and HCs by using a total of 18 features in both sensor- and source-level.</p><p><strong>Conclusions: </strong>This study demonstrated that patients with tinnitus exhibited significantly altered neural processing during the cognitive-related oddball paradigm, including lower P3 amplitudes, delayed N2 latency, and reduced source activities in specific brain regions in cognitive-related oddball paradigm. The correlations between N2 source activities and Hospital Anxiety and Depression Scale-Depression scores suggest a potential link between the physiological symptoms of tinnitus and their neural impact on patients with tinnitus. Such findings underscore the potential diagnostic relevance of N2- and P3-related features in tinnitus, while also highlighting the interplay between the temporal lobe and occipital lobe in tinnitus. 
Furthermore, the application of machine learning techniques has shown reliable results in distinguishing tinnitus patients from HCs, reinforcing the v","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":"46 3","pages":"770-781"},"PeriodicalIF":2.6,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144022953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
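Extracting the N2 and P3 features used by the classifier amounts to peak picking on the averaged oddball response at each midline electrode. The sketch below assumes a NumPy array of averaged ERPs and conventional search windows (roughly 150 to 300 ms for N2 and 250 to 500 ms for P3); the windows, sampling rate, baseline period, and file name are illustrative, not taken from the paper.

```python
# Sketch: pull N2 (negative) and P3 (positive) peak amplitude/latency from an
# averaged auditory-oddball ERP. Search windows and data layout are assumptions.
import numpy as np

FS = 500                      # samples per second (assumed)
BASELINE_MS = 200             # epoch assumed to start 200 ms before stimulus onset

def ms_to_idx(ms):
    return int(round((ms + BASELINE_MS) * FS / 1000))

def peak_in_window(erp, t0_ms, t1_ms, polarity):
    """Return (amplitude_uV, latency_ms) of the extremum inside the window."""
    i0, i1 = ms_to_idx(t0_ms), ms_to_idx(t1_ms)
    seg = erp[i0:i1]
    idx = np.argmin(seg) if polarity == "neg" else np.argmax(seg)
    latency_ms = (i0 + idx) * 1000 / FS - BASELINE_MS
    return seg[idx], latency_ms

# erp_cz: averaged deviant-tone response at Cz, in microvolts (hypothetical data).
erp_cz = np.load("erp_cz_deviant.npy")
n2_amp, n2_lat = peak_in_window(erp_cz, 150, 300, "neg")
p3_amp, p3_lat = peak_in_window(erp_cz, 250, 500, "pos")
print(f"N2: {n2_amp:.2f} uV at {n2_lat:.0f} ms; P3: {p3_amp:.2f} uV at {p3_lat:.0f} ms")
```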
Ear and Hearing · Pub Date: 2025-05-01 · Epub Date: 2025-02-19 · DOI: 10.1097/AUD.0000000000001611
Lukas Suveg, Tanvi Thakkar, Emily Burg, Shelly P Godar, Daniel Lee, Ruth Y Litovsky
{"title":"The Relationship Between Spatial Release From Masking and Listening Effort Among Cochlear Implant Users With Single-Sided Deafness.","authors":"Lukas Suveg, Tanvi Thakkar, Emily Burg, Shelly P Godar, Daniel Lee, Ruth Y Litovsky","doi":"10.1097/AUD.0000000000001611","DOIUrl":"10.1097/AUD.0000000000001611","url":null,"abstract":"<p><strong>Objectives: </strong>To examine speech intelligibility and listening effort in a group of patients with single-sided deafness (SSD) who received a cochlear implant (CI). There is limited knowledge on how effectively SSD-CI users can integrate electric and acoustic inputs to obtain spatial hearing benefits that are important for navigating everyday noisy environments. The present study examined speech intelligibility in quiet and noise simultaneously with measuring listening effort using pupillometry in individuals with SSD before, and 1 year after, CI activation. The study was designed to examine whether spatial separation between target and interfering speech leads to improved speech understanding (spatial release from masking [SRM]), and is associated with a decreased effort (spatial release from listening effort [SRE]) measured with pupil dilation (PPD).</p><p><strong>Design: </strong>Eight listeners with adult-onset SSD participated in two visits: (1) pre-CI and (2) post-CI (1 year after activation). Target speech consisted of Electrical and Electronics Engineers sentences and masker speech consisted of AzBio sentences. Outcomes were measured in three target-masker configurations with the target fixed at 0° azimuth: (1) quiet, (2) co-located target/maskers, and (3) spatially separated (±90° azimuth) target/maskers. Listening effort was quantified as change in peak proportional PPD on the task relative to baseline dilation. Participants were tested in three listening modes: acoustic-only, CI-only, and SSD-CI (both ears). At visit 1, the acoustic-only mode was tested in all three target-masker configurations. At visit 2, the acoustic-only and CI-only modes were tested in quiet, and the SSD-CI listening mode was tested in all three target-masker configurations.</p><p><strong>Results: </strong>Speech intelligibility scores in quiet were at the ceiling for the acoustic-only mode at both visits, and in the SSD-CI listening mode at visit 2. In quiet, at visit 2, speech intelligibility scores were significantly worse in the CI-only listening modes than in all other listening modes. Comparing SSD-CI listening at visit 2 with pre-CI acoustic-only listening at visit 1, speech intelligibility scores for co-located and spatially separated configurations showed a trend toward improvement (higher scores) that was not significant. However, speech intelligibility was significantly higher in the separated compared with the co-located configuration in acoustic-only and SSD-CI listening modes, indicating SRM. PPD evoked by speech presented in quiet was significantly higher with CI-only listening at visit 2 compared with acoustic-only listening at visit 1. However, there were no significant differences between co-located and spatially separated configurations on PPD, likely due to the variability among this small group of participants. 
There was a negative correlation between SRM and SRE, indicating that improved speech intelligibility with spatial sep","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"624-639"},"PeriodicalIF":2.6,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996618/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143451074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
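The two derived measures in this study, SRM and SRE, are simple differences between the co-located and spatially separated configurations, and peak proportional pupil dilation is a normalized change from a pre-stimulus baseline. The sketch below spells out those computations with hypothetical values; the exact baseline window and normalization used by the authors are not stated in the abstract.

```python
# Sketch of the derived measures: peak proportional pupil dilation (PPD),
# spatial release from masking (SRM), and spatial release from listening
# effort (SRE). Baseline window and example values are assumptions.
import numpy as np

def peak_proportional_ppd(pupil_trace, baseline_samples):
    """Peak pupil dilation on a trial, expressed as a proportion of baseline."""
    baseline = np.mean(pupil_trace[:baseline_samples])
    return (np.max(pupil_trace[baseline_samples:]) - baseline) / baseline

# Percent-correct intelligibility per configuration (hypothetical values).
intel_colocated, intel_separated = 62.0, 78.0
srm = intel_separated - intel_colocated        # positive = benefit of separation

# Mean peak proportional PPD per configuration (hypothetical values).
ppd_colocated, ppd_separated = 0.21, 0.15
sre = ppd_colocated - ppd_separated            # positive = less effort when separated

print(f"SRM = {srm:.1f} percentage points, SRE = {sre:.2f} (proportional PPD)")
```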