{"title":"Inner Ear Pathologies After Cochlear Implantation in Guinea Pigs: Functional, Histopathological, and Endoplasmic Reticulum Stress-Mediated Apoptosis.","authors":"Yuzhong Zhang, Qiong Wu, Shuyun Liu, Yu Zhao, Qingqing Dai, Yulian Jin, Qing Zhang","doi":"10.1097/AUD.0000000000001668","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001668","url":null,"abstract":"<p><strong>Objectives: </strong>Vestibular dysfunction is one of the most common complications of cochlear implantation (CI); however, the pathological changes and mechanisms underlying inner ear damage post-CI remain poorly understood. This study aimed to investigate the functional and histopathological changes in the cochlea and vestibule as well as endoplasmic reticulum (ER) stress-mediated apoptosis in guinea pigs after CI.</p><p><strong>Design: </strong>Auditory brainstem response, ice water test, and vestibular evoked myogenic potentials were used to assess cochlear and vestibular function in guinea pigs before and after CI. Histopathological analyses were conducted at various time points post-CI to observe morphological changes in the cochlea and vestibule, as well as the impact of ER stress on these tissues.</p><p><strong>Results: </strong>After CI, 10.7% (9/84) of the guinea pigs exhibited nystagmus and balance dysfunction. Auditory brainstem response thresholds increased significantly after CI, and air-conducted cervical and ocular vestibular evoked myogenic potential response rates decreased. The ice water test revealed a gradual reduction in nystagmus elicitation rates, along with decreased nystagmus frequency, prolonged latency, and shortened duration. Histopathological analysis of the cochlea revealed fibrous and osseous tissue formation in the scala tympani and a reduction in hair cells and spiral ganglion cells. In the vestibule, alterations included flattening the ampullary crista and disorganized sensory epithelial cells. Transmission electron microscopy revealed pathological changes including cytoplasmic vacuolization and chromatin uniformity in both cochlear and vestibular hair cells. ER stress was prominent in the cochlea, while no substantial stress response was observed in the vestibule.</p><p><strong>Conclusions: </strong>Our study highlights the various effects of CI surgery on cochlear and vestibular function and morphology in guinea pigs. ER stress-mediated apoptosis may contribute to secondary cochlear damage, whereas the vestibular system demonstrates adaptive responses that preserve cellular homeostasis. These findings provide insights into potential mechanisms underlying inner ear complications post-CI.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143992970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of Adjusted Exposure Levels Based on Different Kurtosis Adjustment Algorithms and Their Performance Comparison in Evaluating Noise-Induced Hearing Loss.","authors":"Hengjiang Liu, Meibian Zhang, Xin Sun, Weijiang Hu, Hua Zou, Jingsong Li, Wei Qiu","doi":"10.1097/AUD.0000000000001674","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001674","url":null,"abstract":"<p><strong>Objectives: </strong>Kurtosis is an essential metric in evaluating hearing loss caused by complex noise, which is calculated from the fourth central moment and the SD of the noise signal. Previous studies have shown that kurtosis-adjusted noise exposure levels can more accurately predict hearing loss caused by various types of noise. There are three potential kurtosis adjustment schemes: arithmetic averaging, geometric averaging, and segmented adjustment. This study evaluates which kurtosis adjustment scheme is most practical based on the data collected from industrial settings.</p><p><strong>Design: </strong>This study analyzed individual daily noise recordings collected from 4276 workers in manufacturing industries in China. Using 60 sec as the calculation window length, each window's noise kurtosis was calculated without overlap. Then, the arithmetic averaging (Scheme 1) and geometric averaging (Scheme 2) algorithms were used to calculate the kurtosis of the shift-long noise. Eventually, the kurtosis-adjusted 8 h working day exposure level (LAeq,8hr) was obtained using the kurtosis-adjusted formula. In Scheme 3 (i.e., segmented adjustment algorithm), kurtosis was determined per 60 sec simultaneously with A-weighted sound pressure level (LAeq,60sec). Kurtosis adjustment was applied on LAeq,60sec every 60 sec. Then, the kurtosis-adjusted LAeq,8hr was calculated by log-averaging of 480 one-minute-adjusted LAeq,60sec values. The cohort was divided into three groups according to the level of kurtosis. Which group the participants belonged to depended on the method used to calculate the shift-long noise kurtosis (i.e., arithmetic or geometric averaging). Noise-induced hearing loss was defined as noise-induced permanent threshold shift at frequencies 3, 4, and 6 kHz (NIPTS346). Predicted NIPTS346 was calculated using the ISO 1999 model or Lempert's model for each participant, and the actual NIPTS346 was determined by correcting for age and sex using non-noise-exposed Chinese workers (n = 1297). A dose-effect relationship for three kurtosis groups was established using the NIPTS346 and kurtosis-adjusted LAeq,8hr. The performance of three kurtosis adjustment algorithms was evaluated by comparing the estimated marginal mean of the difference between estimated NIPTS346 by ISO 1999 or estimated NIPTS346 by Lempert's model and actual NIPTS346 in three kurtosis groups.</p><p><strong>Results: </strong>Multiple linear regression was used to analyze the noise kurtosis classified data obtained by arithmetic and geometric averaging, and the calculated adjustment coefficients were 6.5 and 7.6, respectively. Multilayer perceptron regression was used to identify the optimal coefficients in the segmented adjustment, resulting in a coefficient value of 5.4. These three adjustment schemes were used to evaluate the performance of NIPTS346 prediction using Lempert's model. 
The kurtosis adjustment based on the geometric averaging algorithm (Scheme 2) and on th","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144022937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
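The abstract defines kurtosis from the fourth central moment and the SD, and describes three ways of folding per-minute kurtosis into LAeq,8hr. The sketch below illustrates the three schemes under an assumption: it uses the additive adjustment of the form LAeq + λ·log10(β/3) familiar from earlier kurtosis-adjustment work, since the paper's exact formula is not given in the abstract. The coefficients 6.5, 7.6, and 5.4 come from the Results.

```python
import numpy as np

def window_kurtosis(x):
    """Kurtosis of one 60-sec window: fourth central moment / SD**4."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.mean(d**4) / np.mean(d**2) ** 2

def log_average(levels_db):
    """Energy (log) average of A-weighted levels in dB."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

def adjusted_laeq_8hr(laeq_60s, kurtosis_60s, lam, scheme):
    """Kurtosis-adjusted LAeq,8hr over 480 one-minute windows.

    Assumes an additive adjustment lam * log10(beta / 3), applied only
    when beta exceeds 3 (the kurtosis of Gaussian noise).
    """
    l = np.asarray(laeq_60s, dtype=float)
    k = np.asarray(kurtosis_60s, dtype=float)
    if scheme == "arithmetic":            # Scheme 1, lam ~ 6.5
        beta = k.mean()
    elif scheme == "geometric":           # Scheme 2, lam ~ 7.6
        beta = np.exp(np.mean(np.log(k)))
    elif scheme == "segmented":           # Scheme 3, lam ~ 5.4
        # Adjust each minute, then log-average the 480 adjusted values.
        adj = l + lam * np.log10(np.maximum(k, 3.0) / 3.0)
        return log_average(adj)
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return log_average(l) + lam * np.log10(max(beta, 3.0) / 3.0)
```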
How Switching Musical Instruments Affects Pitch Discrimination for Cochlear Implant Users
Samantha Reina O'Connell, Susan R S Bissmeyer, Helena Gan, Raymond Lee Goldsworthy
Ear and Hearing, published 2025-05-06. DOI: 10.1097/AUD.0000000000001640

Objectives: Cochlear implant (CI) users struggle with music perception. Generally, they have poorer pitch discrimination and timbre identification than peers with normal hearing, which reduces their overall music appreciation and quality of life. This study's primary aim was to characterize how the increased difficulty of comparing pitch changes across musical instruments affects CI users and their peers with no known hearing loss. The motivation is to better understand the challenges that CI users face with polyphonic music listening. The primary hypothesis was that CI users would be more affected by instrument switching than those with no known hearing loss. The rationale was that poorer pitch and timbre perception through a CI hinders the dissociation between pitch and timbre changes needed for this demanding task.

Design: Pitch discrimination was measured for piano and tenor saxophone, including conditions with pitch comparisons across instruments. Adult participants included 15 CI users and 15 peers with no known hearing loss. Pitch discrimination was measured for four note ranges centered on A2 (110 Hz), A3 (220 Hz), A4 (440 Hz), and A5 (880 Hz). The effect of instrument switching was quantified as the change in discrimination thresholds with and without instrument switching. Analysis of variance and Spearman's rank correlation were used to test group differences and relational outcomes, respectively.

Results: Although CI users had worse pitch discrimination, the additional difficulty of instrument switching did not significantly differ between groups. Discrimination thresholds in both groups were about two times worse with instrument switching than without. Further analyses, however, revealed that CI users were biased toward ranking tenor saxophone higher in pitch compared with piano, whereas those with no known hearing loss were not so biased. In addition, CI users were significantly more affected by instrument switching for the A5 note range.

Conclusions: The magnitude of the effect of instrument switching on pitch resolution was similar for CI users and their peers with no known hearing loss. However, CI users were biased toward ranking tenor saxophone as higher in pitch and were significantly more affected by instrument switching for pitches near A5. These findings might reflect poorer temporal coding of fundamental frequency by CIs.
Machine Learning Models Can Predict Tinnitus and Noise-Induced Hearing Loss
Zahra Jafari, Ryan E Harari, Glenn Hole, Bryan E Kolb, Majid H Mohajerani
Ear and Hearing, published 2025-05-06. DOI: 10.1097/AUD.0000000000001670

Objectives: Despite the extensive use of machine learning (ML) models in health sciences for outcome prediction and condition classification, their application in differentiating various types of auditory disorders remains limited. This study aimed to address this gap by evaluating the efficacy of five ML models in distinguishing (a) individuals with tinnitus from those without tinnitus and (b) noise-induced hearing loss (NIHL) from age-related hearing loss (ARHL).

Design: We used data from a cross-sectional study of the Canadian population, which included audiologic and demographic information from 928 adults aged 30 to 100 years, diagnosed with either ARHL or NIHL due to long-term occupational noise exposure. The ML models applied in this study were artificial neural networks (ANNs), K-nearest neighbors, logistic regression, random forest (RF), and support vector machines.

Results: The study revealed that tinnitus prevalence was over twice as high in the NIHL group compared with the ARHL group, with a frequency of 27.85% versus 8.85% for constant tinnitus and 18.55% versus 10.86% for intermittent tinnitus. In pattern recognition, significantly greater hearing loss was found at medium- and high-band frequencies in NIHL versus ARHL. In both NIHL and ARHL, individuals with tinnitus showed better pure-tone sensitivity than those without tinnitus. Among the ML models, ANN achieved the highest overall accuracy (70%), precision (60%), and F1-score (87%) for predicting tinnitus, with an area under the curve of 0.71. RF outperformed other models in differentiating NIHL from ARHL, with the highest precision (79% for NIHL, 85% for ARHL), recall (85% for NIHL), F1-score (81% for NIHL), and area under the curve (0.90).

Conclusions: Our findings highlight the application of ML models, particularly ANN and RF, in advancing diagnostic precision for tinnitus and NIHL, potentially providing a framework for integrating ML techniques into clinical audiology. Future research is suggested to expand datasets to include diverse populations and integrate longitudinal data.
The Relationship Between Spatial Release From Masking and Listening Effort Among Cochlear Implant Users With Single-Sided Deafness
Lukas Suveg, Tanvi Thakkar, Emily Burg, Shelly P Godar, Daniel Lee, Ruth Y Litovsky
Ear and Hearing, 2025-05-01 (Epub 2025-02-19), pages 624-639. DOI: 10.1097/AUD.0000000000001611. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996618/pdf/

Objectives: To examine speech intelligibility and listening effort in a group of patients with single-sided deafness (SSD) who received a cochlear implant (CI). There is limited knowledge of how effectively SSD-CI users can integrate electric and acoustic inputs to obtain the spatial hearing benefits that are important for navigating everyday noisy environments. The present study examined speech intelligibility in quiet and in noise while simultaneously measuring listening effort using pupillometry in individuals with SSD before, and 1 year after, CI activation. The study was designed to examine whether spatial separation between target and interfering speech leads to improved speech understanding (spatial release from masking [SRM]) and is associated with decreased effort (spatial release from listening effort [SRE]) measured with pupil dilation (PPD).

Design: Eight listeners with adult-onset SSD participated in two visits: (1) pre-CI and (2) post-CI (1 year after activation). Target speech consisted of IEEE (Institute of Electrical and Electronics Engineers) sentences and masker speech consisted of AzBio sentences. Outcomes were measured in three target-masker configurations with the target fixed at 0° azimuth: (1) quiet, (2) co-located target/maskers, and (3) spatially separated (±90° azimuth) target/maskers. Listening effort was quantified as the change in peak proportional PPD on the task relative to baseline dilation. Participants were tested in three listening modes: acoustic-only, CI-only, and SSD-CI (both ears). At visit 1, the acoustic-only mode was tested in all three target-masker configurations. At visit 2, the acoustic-only and CI-only modes were tested in quiet, and the SSD-CI listening mode was tested in all three target-masker configurations.

Results: Speech intelligibility scores in quiet were at ceiling for the acoustic-only mode at both visits, and in the SSD-CI listening mode at visit 2. In quiet, at visit 2, speech intelligibility scores were significantly worse in the CI-only listening mode than in all other listening modes. Comparing SSD-CI listening at visit 2 with pre-CI acoustic-only listening at visit 1, speech intelligibility scores for co-located and spatially separated configurations showed a trend toward improvement (higher scores) that was not significant. However, speech intelligibility was significantly higher in the separated compared with the co-located configuration in acoustic-only and SSD-CI listening modes, indicating SRM. PPD evoked by speech presented in quiet was significantly higher with CI-only listening at visit 2 compared with acoustic-only listening at visit 1. However, there were no significant differences between co-located and spatially separated configurations on PPD, likely due to the variability among this small group of participants. There was a negative correlation between SRM and SRE, indicating that improved speech intelligibility with spatial sep…
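The two derived measures are simple contrasts between the co-located and spatially separated configurations. A short sketch with hypothetical per-participant values (the definitions follow the abstract; the numbers do not):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical values for 8 listeners: percent-correct intelligibility and
# peak proportional pupil dilation (PPD) re: baseline, per configuration.
intel_colocated = np.array([48, 55, 60, 42, 51, 58, 46, 53], dtype=float)
intel_separated = np.array([63, 70, 66, 55, 68, 72, 50, 61], dtype=float)
ppd_colocated = np.array([0.24, 0.20, 0.18, 0.27, 0.22, 0.19, 0.25, 0.21])
ppd_separated = np.array([0.17, 0.15, 0.16, 0.20, 0.14, 0.13, 0.24, 0.18])

srm = intel_separated - intel_colocated  # spatial release from masking (points)
sre = ppd_colocated - ppd_separated      # spatial release from listening effort

# The study reports a negative SRM-SRE correlation across listeners.
rho, p = spearmanr(srm, sre)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```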
Machine Learning-Based Diagnosis of Chronic Subjective Tinnitus With Altered Cognitive Function: An Event-Related Potential Study
Jihoo Kim, Kang Hyeon Lim, Euijin Kim, Seunghu Kim, Hong Jin Kim, Ye Hwan Lee, Sungkean Kim, June Choi
Ear and Hearing, 46(3), pages 770-781, 2025-05-01 (Epub 2025-01-20). DOI: 10.1097/AUD.0000000000001623

Objectives: Due to the absence of objective diagnostic criteria, tinnitus diagnosis primarily relies on subjective assessments. However, its neuropathological features can be objectively quantified using electroencephalography (EEG). Despite the existing research, the pathophysiology of tinnitus remains unclear. The objective of this study was to gain a deeper comprehension of the neural mechanisms underlying tinnitus through the comparison of cognitive event-related potentials in patients with tinnitus and healthy controls (HCs). Furthermore, we explored the potential of EEG-derived features as biomarkers for tinnitus using machine learning techniques.

Design: Forty-eight participants (24 patients with tinnitus and 24 HCs) underwent comprehensive audiological assessments and EEG recordings. We extracted the N2 and P3 components at the midline electrodes using an auditory oddball paradigm to explore the relationship between tinnitus and cognitive function. In addition, the current source density for N2- and P3-related regions of interest was computed. A linear support vector machine classifier was used to distinguish patients with tinnitus from HCs.

Results: The P3 peak amplitudes were significantly diminished in patients with tinnitus at the AFz, Fz, Cz, and Pz electrodes, whereas the N2 peak latencies were significantly delayed at the Cz electrode. Source analysis revealed notably reduced N2 activities in the bilateral fusiform gyrus, bilateral cuneus, bilateral temporal gyrus, and bilateral insula of patients with tinnitus. Correlation analysis revealed significant associations between the Hospital Anxiety and Depression Scale-Depression scores and N2 source activities at the left insula, right insula, and left inferior temporal gyrus. The best classification performance showed a validation accuracy of 85.42%, validation sensitivity of 87.50%, and validation specificity of 83.33% in distinguishing between patients with tinnitus and HCs, using a total of 18 features at both the sensor and source levels.

Conclusions: This study demonstrated that patients with tinnitus exhibited significantly altered neural processing during the cognitive-related oddball paradigm, including lower P3 amplitudes, delayed N2 latency, and reduced source activities in specific brain regions. The correlations between N2 source activities and Hospital Anxiety and Depression Scale-Depression scores suggest a potential link between the physiological symptoms of tinnitus and their neural impact on patients with tinnitus. Such findings underscore the potential diagnostic relevance of N2- and P3-related features in tinnitus, while also highlighting the interplay between the temporal lobe and occipital lobe in tinnitus. Furthermore, the application of machine learning techniques has shown reliable results in distinguishing tinnitus patients from HCs, reinforcing the v…
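A minimal sketch of the classification step described above: a linear support vector machine over the 18 sensor- and source-level ERP features. The feature matrix is stubbed out (the abstract does not enumerate the individual features); cross-validated sensitivity is recall on the tinnitus class and specificity is recall on the control class.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, recall_score

# Placeholder features: 48 participants (24 tinnitus, 24 controls) x 18
# N2/P3 sensor- and source-level features.
rng = np.random.default_rng(0)
X = rng.normal(size=(48, 18))
y = np.repeat([1, 0], 24)  # 1 = tinnitus, 0 = healthy control

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
pred = cross_val_predict(clf, X, y, cv=5)

print("accuracy:   ", accuracy_score(y, pred))
print("sensitivity:", recall_score(y, pred, pos_label=1))
print("specificity:", recall_score(y, pred, pos_label=0))
```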
{"title":"Catch-Up Saccades in Vestibulo-Ocular Reflex Deficit: Contribution of Visual Information?","authors":"Ruben Hermann, Stefano Ramat, Silvia Colnaghi, Vincent Lagadec, Clément Desoche, Denis Pelisson, Caroline Froment Tilikete","doi":"10.1097/AUD.0000000000001616","DOIUrl":"10.1097/AUD.0000000000001616","url":null,"abstract":"<p><strong>Objectives: </strong>Catch-up saccades help to compensate for loss of gaze stabilization during rapid head rotation in case of vestibular deficit. While overt saccades observed after head rotation are obviously visually guided, some of these catch-up saccades occur with shorter latency while the head is still moving, anticipating the needed final eye position. These covert saccades seem to be generated based on the integration of multisensory inputs. Vision could be one of these inputs, but the known delay for triggering visually guided saccades questions this possibility. The main objective of this study is to evaluate the potential role of visual information for controlling (triggering and guiding) the first catch-up saccades in patients suffering from bilateral vestibulopathy. To investigate this, we used head impulse test in a virtual reality setting allowing to create different visuo-vestibular mismatch conditions.</p><p><strong>Design: </strong>Twelve patients with bilateral vestibulopathy were recruited. We assessed in our patient group the validity of our virtual reality head impulse testing approach by comparing recorded eye and head movement to classical video head impulse test. Then, using the virtual reality system, we tested head impulse test under both normal and three visuo-vestibular mismatch conditions. In these mismatch conditions, the movement of the visual scene relative to the head movement was altered: decreased in amplitude by 50% (half), nullified (freeze), or inverted in direction (inverse). Recorded eye and head movements during these different conditions were then analyzed, more specifically the characteristics of the first catch-up saccade.</p><p><strong>Results: </strong>Impaired vestibulo-ocular reflex required subjects to systematically perform catch-up saccades, which could be covert or overt. The latency of the first catch-up saccade increased along with the amount of visuo-vestibular mismatch between the four conditions (i.e., from normal to half to freeze to inverse) and, consequently, the mean percentage of covert saccades decreased with increasing visual feedback error. However, the freeze and inverse conditions allowed us to reveal the existence of many saccades performed in the wrong direction relative to visual feedback. These visually discordant saccades were present in over half of the trials, they were mainly covert and their percentage was inversely correlated with residual vestibulo-ocular reflex gain.</p><p><strong>Conclusions: </strong>Visual information significantly impacts catch-up saccade latency and the relative number of covert saccades during head impulse testing in vestibular deficit. However, in more than 50% of trials involving a visuo-vestibular mismatch, catch-up saccades remained directed in the compensatory direction relative to head movement, that is, they were visually discordant. 
Therefore, contrary to previously published proposals, visual information does not appear to b","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"719-728"},"PeriodicalIF":2.6,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142848518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
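Two quantities recur throughout this abstract: the vestibulo-ocular reflex (VOR) gain and the covert/overt split of catch-up saccades. A toy sketch of both, assuming uniformly sampled velocity traces and an area-ratio gain definition (the paper's exact gain algorithm is not stated in the abstract):

```python
import numpy as np

def vor_gain(head_vel, eye_vel):
    """Ratio of compensatory eye velocity to head velocity over the impulse.
    Eye velocity is opposite in sign to head velocity, hence the negation.
    Assumes both traces share the same uniform sampling rate."""
    return -np.sum(np.asarray(eye_vel)) / np.sum(np.asarray(head_vel))

def classify_saccade(saccade_onset_ms, head_movement_end_ms):
    """Covert saccades begin while the head is still moving; overt ones after."""
    return "covert" if saccade_onset_ms < head_movement_end_ms else "overt"

# Example: a deficient VOR (gain well below 1) with an early catch-up saccade.
t = np.linspace(0.0, 0.15, 150)            # 150-ms head impulse
head = 250.0 * np.sin(np.pi * t / 0.15)    # head velocity profile (deg/s)
eye = -0.3 * head                          # weak compensatory eye response
print(f"gain = {vor_gain(head, eye):.2f}")                               # 0.30
print(classify_saccade(saccade_onset_ms=110, head_movement_end_ms=150))  # covert
```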
Isolated Corrective Saccades in the Bilateral Posterior Canal Stimulation During the Video Head Impulse Test: A Marker of Central Vestibulopathy?
Genoveva Hurtado, Elizabeth A Poth, Neil P Monaghan, Shaun A Nguyen, Habib G Rizk
Ear and Hearing, 2025-05-01 (Epub 2024-12-23), pages 729-734. DOI: 10.1097/AUD.0000000000001617

Objectives: This study aimed to determine whether the presence of corrective saccades during video head impulse test (vHIT) stimulation of the bilateral posterior semicircular canals (PSCs) correlated with other vestibular test results, demographics, symptoms, or diagnoses.

Design: This study was a retrospective chart review in which 1006 subjects' vHIT records were screened, with 17 subjects meeting the inclusion criteria for isolated bilateral PSC saccades.

Results: Of the 1006 patients undergoing vHIT testing, only 1.7% had isolated bilateral PSC saccades. The median age of subjects was 73 years, with a range of 61 to 85 years. Statistical significance was identified between groups with abnormal PSC vHIT gain and abnormal ocular vestibular evoked myogenic potential results, as well as those with 1 to 2 diagnoses.

Conclusions: Our study confirms the rarity of isolated bilateral PSC vHIT saccades, as well as their association with central vestibulopathy. Correlations with other vestibular test results, demographics, symptoms, or diagnoses may be strengthened by future large-scale studies. Further understanding of the clinical utility of isolated bilateral PSC vHIT saccades is needed. Patients with bilateral PSC vHIT abnormalities may benefit from a comprehensive neurological evaluation and consultation.
Behavioral Response Modeling to Resolve Listener- and Stimulus-Related Influences on Audiovisual Speech Integration in Cochlear Implant Users
Cailey A Salagovic, Ryan A Stevenson, Blake E Butler
Ear and Hearing, 2025-05-01 (Epub 2024-12-11), pages 596-606. DOI: 10.1097/AUD.0000000000001607

Objectives: Speech intelligibility is supported by the sound of a talker's voice and visual cues related to articulatory movements. The relative contribution of auditory and visual cues to an integrated audiovisual percept varies depending on a listener's environment and sensory acuity. Cochlear implant users rely more on visual cues than those with acoustic hearing to help compensate for the fact that the auditory signal produced by their implant is poorly resolved relative to that of the typically developed cochlea. The relative weight placed on auditory and visual speech cues can be measured by presenting discordant cues across the two modalities and assessing the resulting percept (the McGurk effect). The current literature is mixed with regard to how cochlear implant users respond to McGurk stimuli; some studies suggest they report hearing syllables that represent a fusion of the auditory and visual cues more frequently than typical-hearing controls, while others report less frequent fusion. However, several of these studies compared implant users to younger control samples despite evidence that the likelihood and strength of audiovisual integration increase with age. Thus, the present study sought to clarify the impacts of hearing status and age on multisensory speech integration using a combination of behavioral analyses and response modeling.

Design: Cochlear implant users (mean age = 58.9 years), age-matched controls (mean age = 61.5 years), and younger controls (mean age = 25.9 years) completed an online audiovisual speech task. Participants were shown and/or heard four different talkers producing syllables in auditory-alone, visual-alone, and incongruent audiovisual conditions. After each trial, participants reported the syllable they heard or saw from a list of four possible options.

Results: The younger and older control groups performed similarly in both unisensory conditions. The cochlear implant users performed significantly better than either control group in the visual-alone condition. When responding to the incongruent audiovisual trials, cochlear implant users and age-matched controls experienced significantly more fusion than younger controls. When fusion was not experienced, younger controls were more likely to report the auditorily presented syllable than either implant users or age-matched controls. Conversely, implant users were more likely to report the visually presented syllable than either age-matched controls or younger controls. Modeling of the relationship between stimuli and behavioral responses revealed that younger controls had lower disparity thresholds (i.e., were less likely to experience a fused audiovisual percept) than either the implant users or older controls, while implant users had higher levels of sensory noise (i.e., more variability in the way a given stimulus pair is perceived across multiple presentations) than age-matched controls. …
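The response modeling summarized above estimates, per group, a disparity threshold (how discordant the auditory and visual cues must appear before fusion fails) and a sensory-noise level (trial-to-trial variability in the perceived stimulus pair). A toy version of such a model, purely illustrative and not the authors' actual fitting procedure:

```python
from math import erf, sqrt

def p_fused(disparity, threshold, sensory_noise):
    """Toy model: probability a stimulus pair is perceived as fused, i.e.,
    the perceived disparity (true disparity + Gaussian sensory noise) falls
    below the listener's disparity threshold."""
    z = (threshold - disparity) / (sensory_noise * sqrt(2.0))
    return 0.5 * (1.0 + erf(z))  # Gaussian CDF evaluated at the threshold

# Lower threshold (younger controls): less fusion at the same disparity.
print(p_fused(disparity=1.0, threshold=0.8, sensory_noise=0.3))  # ~0.32
# Higher sensory noise (implant users): responses vary more across trials.
print(p_fused(disparity=1.0, threshold=0.8, sensory_noise=0.8))  # ~0.43
```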
Impact of High- and Low-Pass Acoustic Filtering on Audiovisual Speech Redundancy and Benefit in Children
Kaylah Lalonde, Grace Dwyer, Adam Bosen, Abby Pitts
Ear and Hearing, 2025-05-01 (Epub 2025-01-31), pages 735-746. DOI: 10.1097/AUD.0000000000001622. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996616/pdf/

Objectives: To investigate the influence of frequency-specific audibility on audiovisual benefit in children, this study examined the impact of high- and low-pass acoustic filtering on auditory-only and audiovisual word and sentence recognition in children with typical hearing. Previous studies show that visual speech provides greater access to consonant place of articulation than to other consonant features, and that low-pass filtering has a strong impact on the perception of acoustic consonant place of articulation. This suggests visual speech may be particularly useful when acoustic speech is low-pass filtered because it provides complementary information about consonant place of articulation. We therefore hypothesized that audiovisual benefit would be greater for low-pass filtered speech than for high-pass filtered speech. We also assessed whether this pattern of results would translate to sentence recognition.

Design: Children with typical hearing completed auditory-only and audiovisual tests of consonant-vowel-consonant word and sentence recognition across conditions differing in acoustic frequency content: a low-pass filtered condition in which children could only access acoustic content below 2 kHz and a high-pass filtered condition in which children could only access acoustic content above 2 kHz. They also completed a visual-only test of consonant-vowel-consonant word recognition. We analyzed word, consonant, and keyword-in-sentence recognition and consonant feature (place, voice/manner of articulation) transmission accuracy across modalities and filter conditions using binomial generalized linear mixed models. To assess the degree to which visual speech is complementary versus redundant with acoustic speech, we calculated the proportion of auditory-only target and response consonant pairs that can be told apart using only visual speech and compared these values between the high-pass and low-pass filter conditions.

Results: In auditory-only conditions, recognition accuracy was lower for low-pass filtered consonants and consonant features than for high-pass filtered consonants and consonant features, especially consonant place of articulation. In visual-only conditions, recognition accuracy was greater for consonant place of articulation than for consonant voice/manner of articulation. In addition, auditory consonants in the low-pass filtered condition were more likely to be substituted for visually distinct consonants, meaning that there was more opportunity to use visual cues to supplement missing auditory information in the low-pass filtered condition. Audiovisual benefit for isolated whole words was greater for low-pass filtered speech than for high-pass filtered speech. No difference in audiovisual benefit between filter conditions was observed for phonemes, features, or words-in-sentences. Ceiling effects limit the interpretation of these nonsignificant interactions.

Conclusions: …
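The two listening conditions correspond to complementary filters around 2 kHz. A minimal sketch with SciPy, assuming a sampling rate and zero-phase Butterworth filters (the abstract specifies only the 2-kHz cutoff, not the filter family or order):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 44100     # Hz, assumed sampling rate
CUTOFF = 2000  # Hz, the cutoff named in the study

# 8th-order zero-phase Butterworth filters (order is an assumption).
sos_low = butter(8, CUTOFF, btype="lowpass", fs=FS, output="sos")
sos_high = butter(8, CUTOFF, btype="highpass", fs=FS, output="sos")

def low_pass_condition(x):
    """Keep acoustic content below 2 kHz only."""
    return sosfiltfilt(sos_low, x)

def high_pass_condition(x):
    """Keep acoustic content above 2 kHz only."""
    return sosfiltfilt(sos_high, x)

# Example: filter one second of noise standing in for a recorded word.
x = np.random.default_rng(0).normal(size=FS)
lp, hp = low_pass_condition(x), high_pass_condition(x)
```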