Ear and Hearing. Pub Date: 2025-01-01. Epub Date: 2024-08-06. DOI: 10.1097/AUD.0000000000001572
Jianfen Luo, Ruijie Wang, Kaifan Xu, Xiuhua Chao, Yi Zheng, Fangxia Hu, Xianqi Liu, Andrew E Vandali, Haibo Wang, Lei Xu
{"title":"Outcomes Using the Optimized Pitch and Language Strategy Versus the Advanced Combination Encoder Strategy in Mandarin-Speaking Cochlear Implant Recipients.","authors":"Jianfen Luo, Ruijie Wang, Kaifan Xu, Xiuhua Chao, Yi Zheng, Fangxia Hu, Xianqi Liu, Andrew E Vandali, Haibo Wang, Lei Xu","doi":"10.1097/AUD.0000000000001572","DOIUrl":"10.1097/AUD.0000000000001572","url":null,"abstract":"<p><strong>Objectives: </strong>The experimental Optimized Pitch and Language (OPAL) strategy enhances coding of fundamental frequency (F0) information in the temporal envelope of electrical signals delivered to channels of a cochlear implant (CI). Previous studies with OPAL have explored performance on speech and lexical tone perception in Mandarin- and English-speaking CI recipients. However, it was not clear which cues to lexical tone (primary and/or secondary) were used by the Mandarin CI listeners. The primary aim of the present study was to investigate whether OPAL provides improved recognition of Mandarin lexical tones in both quiet and noisy environments compared with the Advanced Combination Encoder (ACE) strategy. A secondary aim was to investigate whether, and to what extent, removal of secondary (duration and intensity envelope) cues to lexical tone affected Mandarin tone perception.</p><p><strong>Design: </strong>Thirty-two CI recipients with an average age of 24 (range 7 to 57) years were enrolled in the study. All recipients had at least 1 year of experience using ACE. Each subject attended two testing sessions, the first to measure baseline performance, and the second to evaluate the effect of strategy after provision of some take-home experience using OPAL. A minimum take-home duration of approximately 4 weeks was prescribed in which subjects were requested to use OPAL as much as possible but were allowed to also use ACE when needed. The evaluation tests included recognition of Mandarin lexical tones in quiet and in noise (signal to noise ratio [SNR] +5 dB) using naturally produced tones and duration/intensity envelope normalized versions of the tones; Mandarin sentence in adaptive noise; Mandarin monosyllabic and disyllabic word in quiet; a subset of Speech, Spatial, and Qualities of hearing questionnaire (SSQ, speech hearing scale); and subjective preference for strategy in quiet and noise.</p><p><strong>Results: </strong>For both the natural and normalized lexical tone tests, mean scores for OPAL were significantly higher than ACE in quiet by 2.7 and 2.9%-points, respectively, and in noise by 7.4 and 7.2%-points, respectively. Monosyllabic word recognition in quiet using OPAL was significantly higher than ACE by approximately 7.5% points. Average SSQ ratings for OPAL were significantly higher than ACE by approximately 0.5 points on a 10-point scale. In quiet conditions, 14 subjects preferred OPAL, 7 expressed a preference for ACE, and 9 reported no preference. Compared with quiet, in noisy situations, there was a stronger preference for OPAL (19 recipients), a similar preference for ACE (7 recipients), while fewer expressed no preference. 
Average daily take-home use of ACE and OPAL was 4.9 and 7.1 hr, respectively.</p><p><strong>Conclusions: </strong>For Mandarin-speaking CI recipients, OPAL provided significant improvements to lexical tone perception for natural and normalized tones in quiet and noise, monosyllabic word recog","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"210-222"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11637569/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141894965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ear and Hearing. Pub Date: 2025-01-01. Epub Date: 2024-09-05. DOI: 10.1097/AUD.0000000000001579
Amit Walia, Amanda J Ortmann, Shannon Lefler, Timothy A Holden, Sidharth V Puram, Jacques A Herzog, Craig A Buchman
{"title":"Electrocochleography-Based Tonotopic Map: I. Place Coding of the Human Cochlea With Hearing Loss.","authors":"Amit Walia, Amanda J Ortmann, Shannon Lefler, Timothy A Holden, Sidharth V Puram, Jacques A Herzog, Craig A Buchman","doi":"10.1097/AUD.0000000000001579","DOIUrl":"10.1097/AUD.0000000000001579","url":null,"abstract":"<p><strong>Objectives: </strong>Due to the challenges of direct in vivo measurements in humans, previous studies of cochlear tonotopy primarily utilized human cadavers and animal models. This study uses cochlear implant electrodes as a tool for intracochlear recordings of acoustically evoked responses to achieve two primary goals: (1) to map the in vivo tonotopy of the human cochlea, and (2) to assess the impact of sound intensity and the creation of an artificial \"third window\" on this tonotopic map.</p><p><strong>Design: </strong>Fifty patients with hearing loss received cochlear implant electrode arrays. Postimplantation, pure-tone acoustic stimuli (0.25 to 4 kHz) were delivered, and electrophysiological responses were recorded from all 22 electrode contacts. The analysis included fast Fourier transformation to determine the amplitude of the first harmonic, indicative of predominantly outer hair cell activity, and tuning curves to identify the best frequency (BF) electrode. These measures, coupled with postoperative imaging for precise electrode localization, facilitated the construction of an in vivo frequency-position function. The study included a specific examination of 2 patients with auditory neuropathy spectrum disorder (ANSD), with preserved cochlear function as assessed by present distortion-product otoacoustic emissions, to determine the impact of sound intensity on the frequency-position map. In addition, the electrophysiological map was recorded in a patient undergoing a translabyrinthine craniotomy for vestibular schwannoma removal, before and after creating an artificial third window, to explore whether an experimental artifact conducted in cadaveric experiments, as was performed in von Békésy landmark experiments, would produce a shift in the frequency-position map.</p><p><strong>Results: </strong>A significant deviation from the Greenwood model was observed in the electrophysiological frequency-position function, particularly at high-intensity stimulations. In subjects with hearing loss, frequency tuning, and BF location remained consistent across sound intensities. In contrast, ANSD patients exhibited Greenwood-like place coding at low intensities (~40 dB SPL) and a basal shift in BF location at higher intensities (~70 dB SPL or greater). Notably, creating an artificial \"third-window\" did not alter the frequency-position map.</p><p><strong>Conclusions: </strong>This study successfully maps in vivo tonotopy of human cochleae with hearing loss, demonstrating a near-octave shift from traditional frequency-position maps. In patients with ANSD, representing more typical cochlear function, intermediate intensity levels (~70 to 80 dB SPL) produced results similar to high-intensity stimulation. These findings highlight the influence of stimulus intensity on the cochlear operational point in subjects with hearing loss. 
This knowledge could enhance cochlear implant programming and improve auditory rehabilitation by more accurately align","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"253-264"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11649476/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142134544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
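For orientation, the sketch below shows the Greenwood (1990) frequency-position function for the human cochlea, against which the electrophysiological map above is compared, together with a generic first-harmonic amplitude estimate from an FFT of an intracochlear recording. The windowing and scaling choices are illustrative assumptions, not the authors' processing pipeline.

```python
import numpy as np

def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Greenwood (1990) frequency-position function for the human cochlea.
    x: fractional distance from the apex (0 = apex, 1 = base).
    Returns the characteristic frequency in Hz."""
    return A * (10 ** (a * x) - k)

def first_harmonic_amplitude(recording, fs, f_stim):
    """Amplitude of the response component at the stimulus frequency (first
    harmonic), estimated from the FFT of an intracochlear recording.
    Hann windowing and 2/N scaling are generic choices, not the study's."""
    n = len(recording)
    spectrum = np.fft.rfft(recording * np.hanning(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - f_stim))
    return 2.0 * np.abs(spectrum[bin_idx]) / n

# Example: expected characteristic frequency 30% of the way from apex to base
print(round(greenwood_frequency(0.3), 1), "Hz")
```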
Ear and Hearing. Pub Date: 2025-01-01. Epub Date: 2024-07-25. DOI: 10.1097/AUD.0000000000001558
Ana Margarida Amorim, Ana Beatriz Ramada, Ana Cristina Lopes, João Lemos, João Carlos Ribeiro
{"title":"Balance Control Impairments in Usher Syndrome.","authors":"Ana Margarida Amorim, Ana Beatriz Ramada, Ana Cristina Lopes, João Lemos, João Carlos Ribeiro","doi":"10.1097/AUD.0000000000001558","DOIUrl":"10.1097/AUD.0000000000001558","url":null,"abstract":"<p><strong>Objectives: </strong>To explore postural disability in Usher Syndrome (USH) patients using temporal posturographic analysis to better elucidate sensory compensation strategies of deafblind patients for posture control and correlate the Activities-specific Balance Confidence (ABC) scale with posturographic variables.</p><p><strong>Design: </strong>Thirty-four genetically confirmed USH patients (11 USH1, 21 USH2, 2 USH 4) from the Otolaryngology Outpatient Clinic and 35 controls were prospectively studied using both classical and wavelet temporal analysis of center of pressure (CoP) under different visual conditions on static and dynamic platforms. The functional impact of balance was assessed with the ABC scale. Classical data in the spatial domain, Sensorial Organization Test, and frequency analysis of the CoP were analyzed.</p><p><strong>Results: </strong>On unstable surfaces, USH1 had greater CoP surface area with eyes open (38.51 ± 68.67) and closed (28.14 ± 31.64) versus controls (3.31 ± 4.60), p < 0.001 and (7.37 ± 7.91), p < 0.001, respectively. On an unstable platform, USH consistently showed increased postural sway, with elevated angular velocity versus controls with eyes open (USH1 [44.94 ± 62.54]; USH2 [55.64 ± 38.61]; controls [13.4 ± 8.57]) ( p = 0.003; p < 0.001) and closed (USH1 [60.36 ± 49.85], USH2 [57.62 ± 42.36]; controls [27.31 ± 19.79]) ( p = 0.002; p = 0.042). USH visual impairment appears to be the primary factor influencing postural deficits, with a statistically significant difference observed in the visual Sensorial Organization Test ratio for USH1 (80.73 ± 40.07, p = 0.04) and a highly significant difference for USH2 (75.48 ± 31.67, p < 0.001) versus controls (100). In contrast, vestibular ( p = 0.08) and somatosensory ( p = 0.537) factors did not reach statistical significance. USH exhibited lower visual dependence than controls (30.31 ± 30.08) (USH1 [6 ± 11.46], p = 0.004; USH2 [8 ± 14.15], p = 0.005). The postural instability index, that corresponds to the ratio of spectral power index and canceling time, differentiated USH from controls on unstable surface with eyes open USH1 (3.33 ± 1.85) p < 0.001; USH2 (3.87 ± 1.05) p < 0.002; controls (1.91 ± 0.85) and closed USH1 (3.91 ± 1.65) p = 0.005; USH2 (3.92 ± 1.05) p = 0.045; controls (2.74 ± 1.27), but not USH1 from USH2. The canceling time in the anteroposterior direction in lower zone distinguished USH subtypes on stable surface with optokinetic USH1 (0.88 ± 1.03), USH2 (0.29 ± 0.23), p = 0.026 and on unstable surface with eyes open USH1 (0.56 ± 1.26), USH2 (0.072 ± 0.09), p = 0.036. 
ABC scale could distinguish between USH patients and controls, but not between USH subtypes and it correlated with CoP surface area on unstable surface with eyes open only in USH1( ρ = 0.714, p = 0.047).</p><p><strong>Conclusions: </strong>USH patients, particularly USH1, exhibited poorer balance control than controls on unstable platform with eyes open and appeared to rely mor","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"44-52"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141762794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
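As a concrete example of one posturographic variable mentioned above, the sketch below estimates a CoP sway area as the 95% confidence ellipse of the CoP trajectory. This is one common operationalization and an assumption here; the platform software used in the study may define the surface area differently.

```python
import numpy as np

def cop_confidence_ellipse_area(cop_x, cop_y, chi2_95=5.991):
    """Area of the 95% confidence ellipse of a center-of-pressure (CoP)
    trajectory, a common proxy for 'CoP surface area' (an assumption here;
    the study's platform may use another definition).
    chi2_95 is the chi-square value for 2 degrees of freedom at p = 0.95."""
    cov = np.cov(cop_x, cop_y)
    eigvals = np.linalg.eigvalsh(cov)          # principal sway variances
    return np.pi * chi2_95 * np.sqrt(eigvals[0] * eigvals[1])

# Example with simulated 60-s CoP data sampled at 100 Hz (units in mm)
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 0.05, 6000))       # anteroposterior sway
y = np.cumsum(rng.normal(0, 0.05, 6000))       # mediolateral sway
print(f"Sway area: {cop_confidence_ellipse_area(x, y):.1f} mm^2")
```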
Ear and Hearing. Pub Date: 2025-01-01. Epub Date: 2024-08-12. DOI: 10.1097/AUD.0000000000001561
Steven C Marcrum, Lori Rakita, Erin M Picou
{"title":"Effect of Sound Genre on Emotional Responses for Adults With and Without Hearing Loss.","authors":"Steven C Marcrum, Lori Rakita, Erin M Picou","doi":"10.1097/AUD.0000000000001561","DOIUrl":"10.1097/AUD.0000000000001561","url":null,"abstract":"<p><strong>Objectives: </strong>Adults with permanent hearing loss exhibit a reduced range of valence ratings in response to nonspeech sounds; however, the degree to which sound genre might affect such ratings is unclear. The purpose of this study was to determine if ratings of valence covary with sound genre (e.g., social communication, technology, music), or only expected valence (pleasant, neutral, unpleasant).</p><p><strong>Design: </strong>As part of larger study protocols, participants rated valence and arousal in response to nonspeech sounds. For this study, data were reanalyzed by assigning sounds to unidimensional genres and evaluating relationships between hearing loss, age, and gender and ratings of valence. In total, results from 120 adults with normal hearing (M = 46.3 years, SD = 17.7, 33 males and 87 females) and 74 adults with hearing loss (M = 66.1 years, SD = 6.1, 46 males and 28 females) were included.</p><p><strong>Results: </strong>Principal component analysis confirmed valence ratings loaded onto eight unidimensional factors: positive and negative social communication, positive and negative technology, music, animal, activities, and human body noises. Regression analysis revealed listeners with hearing loss rated some genres as less extreme (less pleasant/less unpleasant) than peers with better hearing, with the relationship between hearing loss and valence ratings being similar across genres within an expected valence category. In terms of demographic factors, female gender was associated with less pleasant ratings of negative social communication, positive and negative technology, activities, and human body noises, while increasing age was related to a subtle rise in valence ratings across all genres.</p><p><strong>Conclusions: </strong>Taken together, these results confirm and extend previous findings that hearing loss is related to a reduced range of valence ratings and suggest that this effect is mediated by expected sound valence, rather than sound genre.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"34-43"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141918188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ear and Hearing. Pub Date: 2025-01-01. Epub Date: 2024-07-26. DOI: 10.1097/AUD.0000000000001556
Leanne Sijgers, Christof Röösli, Rahel Bertschinger, Lorenz Epprecht, Dorothe Veraguth, Adrian Dalbert, Alexander Huber, Flurin Pfiffner
{"title":"The Inter-Phase Gap Offset Effect as a Measure of Neural Health in Cochlear Implant Users With Residual Acoustic Hearing.","authors":"Leanne Sijgers, Christof Röösli, Rahel Bertschinger, Lorenz Epprecht, Dorothe Veraguth, Adrian Dalbert, Alexander Huber, Flurin Pfiffner","doi":"10.1097/AUD.0000000000001556","DOIUrl":"10.1097/AUD.0000000000001556","url":null,"abstract":"<p><strong>Objectives: </strong>The inter-phase gap (IPG) offset effect is defined as the dB offset between the linear parts of electrically evoked compound action potential (ECAP) amplitude growth functions for two stimuli differing only in IPG. The method was recently suggested to represent neural health in cochlear implant (CI) users while being unaffected by CI electrode impedances. Hereby, a larger IPG offset effect should reflect better neural health. The aims of the present study were to (1) examine whether the IPG offset effect negatively correlates with the ECAP threshold and the preoperative pure-tone average (PTA) in CI recipients with residual acoustic hearing and (2) investigate the dependency of the IPG offset effect on hair cell survival and intracochlear electrode impedances.</p><p><strong>Design: </strong>Seventeen adult study participants with residual acoustic hearing at 500 Hz undergoing CI surgery at the University Hospital of Zurich were prospectively enrolled. ECAP thresholds, IPG offset effects, electrocochleography (ECochG) responses to 500 Hz tone bursts, and monopolar electrical impedances were obtained at an apical, middle, and basal electrode set during and between 4 and 12 weeks after CI surgery. Pure-tone audiometry was conducted within 3 weeks before surgery and approximately 6 weeks after surgery. Linear mixed regression analyses and t tests were performed to assess relationships between (changes in) ECAP threshold, IPG offset, impedance, PTA, and ECochG amplitude.</p><p><strong>Results: </strong>The IPG offset effect positively correlated with the ECAP threshold in intraoperative recordings ( p < 0.001) and did not significantly correlate with the preoperative PTA ( p = 0.999). The IPG offset showed a postoperative decrease for electrode sets that showed an ECochG amplitude drop. This IPG offset decrease was significantly larger than for electrode sets that showed no ECochG amplitude decrease, t (17) = 2.76, p = 0.014. Linear mixed regression analysis showed no systematic effect of electrode impedance changes on the IPG offset effect ( p = 0.263) but suggested a participant-dependent effect of electrode impedance on IPG offset.</p><p><strong>Conclusions: </strong>The present study results did not reveal the expected relationships between the IPG offset effect and ECAP threshold values or between the IPG offset effect and preoperative acoustic hearing. Changes in electrode impedance did not exhibit a direct impact on the IPG offset effect, although this impact might be individualized among CI recipients. 
Overall, our findings suggest that the interpretation and application of the IPG offset effect in clinical settings should be approached with caution considering its complex relationships with other cochlear and neural health metrics.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"83-97"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11637583/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141762795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
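Using the definition given above (the dB offset between the linear parts of two ECAP amplitude growth functions differing only in IPG), a minimal computational sketch could look like the following. The separate-line fits and the criterion-amplitude approach are assumptions, not the authors' exact fitting procedure.

```python
import numpy as np

def ipg_offset_effect(levels_db, amp_short_ipg, amp_long_ipg):
    """Sketch of the IPG offset effect: the horizontal (dB) shift between the
    linear parts of two ECAP amplitude growth functions that differ only in
    inter-phase gap. Assumes the inputs already contain only the linear range."""
    slope_s, icpt_s = np.polyfit(levels_db, amp_short_ipg, 1)
    slope_l, icpt_l = np.polyfit(levels_db, amp_long_ipg, 1)
    # Level needed to reach a common criterion amplitude in each condition
    criterion = 0.5 * (np.max(amp_short_ipg) + np.min(amp_short_ipg))
    level_s = (criterion - icpt_s) / slope_s
    level_l = (criterion - icpt_l) / slope_l
    return level_s - level_l   # positive: the long-IPG function sits at lower levels

# Example with made-up growth functions (amplitudes in microvolts)
levels = np.array([45.0, 47.0, 49.0, 51.0, 53.0])     # stimulus level, dB
amp_short = np.array([100, 180, 260, 340, 420.0])
amp_long = np.array([160, 240, 320, 400, 480.0])      # same slope, shifted left
print(f"IPG offset effect: {ipg_offset_effect(levels, amp_short, amp_long):.2f} dB")
```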
Ear and Hearing. Pub Date: 2025-01-01. Epub Date: 2024-09-06. DOI: 10.1097/AUD.0000000000001571
Meibian Zhang, Anke Zeng, Hua Zou, Jiarui Xin, Shibiao Su, Wei Qiu, Xin Sun
{"title":"Developing a Framework for Industrial Noise Risk Management Based on Noise Kurtosis and Its Adjustment.","authors":"Meibian Zhang, Anke Zeng, Hua Zou, Jiarui Xin, Shibiao Su, Wei Qiu, Xin Sun","doi":"10.1097/AUD.0000000000001571","DOIUrl":"10.1097/AUD.0000000000001571","url":null,"abstract":"<p><strong>Objectives: </strong>Noise risk control or management based on noise level has been documented, but noise risk management based on a combination of noise level and noise's temporal structure is rarely reported. This study aimed to develop a framework for industrial noise risk management based on noise kurtosis (reflecting noise's temporal structure) and its adjustment for the noise level.</p><p><strong>Design: </strong>A total of 2805 Chinese manufacturing workers were investigated using a cross-sectional survey. The noise exposure data of each subject included L EX,8h , cumulative noise exposure (CNE), kurtosis, and kurtosis-adjusted L EX,8h (L EX,8h -K). Noise-induced permanent threshold shifts were estimated at 3, 4, and 6 kHz frequencies (NIPTS 346 ) and 1, 2, 3, and 4 kHz frequencies (NIPTS 1234 ). The prevalence of high-frequency noise-induced hearing loss prevalence (HFNIHL%) and noise-induced hearing impairment (NIHI%) were determined. Risk 346 or Risk 1234 was predicted using the ISO 1999 or NIOSH 1998 model. A noise risk management framework based on kurtosis and its adjustment was developed.</p><p><strong>Results: </strong>Kurtosis could identify the noise type; Kurtosis combining noise levels could identify the homogeneous noise exposure group (HNEG) among workers. Noise kurtosis was a risk factor of HFNIHL or NIHI with an adjusted odds ratio of 1.57 or 1.52 ( p < 0.01). At a similar CNE level, the NIPTS 346 , HFNIHL%, NIPTS 1234 , or NIHI% increased with increasing kurtosis. A nonlinear regression equation (expressed by logistic function) could rebuild a reliable dose-effect relationship between L EX,8h -K and NIPTS 346 at the 70 to 95 dB(A) noise level range. After the kurtosis adjustment, the median L EX,8h was increased by 5.45 dB(A); the predicted Risk 346 and Risk 1234 were increased by 11.2 and 9.5%, respectively; NIPTS 346 -K of complex noise at exposure level <80, 80 to 85, and 85 to 90 dB(A), determined from the nonlinear regression equation, was almost the same as the Gaussian noise. Risk management measures could be recommended based on the exposure risk rating or the kurtosis-adjusted action levels (e.g., the lower and upper action levels were 80 and 85 dB(A), respectively).</p><p><strong>Conclusions: </strong>The kurtosis and its adjustment for noise levels can be used to develop an occupational health risk management framework for industrial noise. More human studies are needed to verify the risk management framework.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"196-209"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11637571/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142141860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ear and Hearing. Pub Date: 2025-01-01. Epub Date: 2024-07-24. DOI: 10.1097/AUD.0000000000001569
Eun Kyung Jeon, Virginia Driscoll, Bruna S Mussoi, Rachel Scheperle, Emily Guthe, Kate Gfeller, Paul J Abbas, Carolyn J Brown
{"title":"Evaluating Changes in Adult Cochlear Implant Users' Brain and Behavior Following Auditory Training.","authors":"Eun Kyung Jeon, Virginia Driscoll, Bruna S Mussoi, Rachel Scheperle, Emily Guthe, Kate Gfeller, Paul J Abbas, Carolyn J Brown","doi":"10.1097/AUD.0000000000001569","DOIUrl":"10.1097/AUD.0000000000001569","url":null,"abstract":"<p><strong>Objectives: </strong>To describe the effects of two types of auditory training on both behavioral and physiological measures of auditory function in cochlear implant (CI) users, and to examine whether a relationship exists between the behavioral and objective outcome measures.</p><p><strong>Design: </strong>This study involved two experiments, both of which used a within-subject design. Outcome measures included behavioral and cortical electrophysiological measures of auditory processing. In Experiment I, 8 CI users participated in a music-based auditory training. The training program included both short training sessions completed in the laboratory as well as a set of 12 training sessions that participants completed at home over the course of a month. As part of the training program, study participants listened to a range of different musical stimuli and were asked to discriminate stimuli that differed in pitch or timbre and to identify melodic changes. Performance was assessed before training and at three intervals during and after training was completed. In Experiment II, 20 CI users participated in a more focused auditory training task: the detection of spectral ripple modulation depth. Training consisted of a single 40-minute session that took place in the laboratory under the supervision of the investigators. Behavioral and physiologic measures of spectral ripple modulation depth detection were obtained immediately pre- and post-training. Data from both experiments were analyzed using mixed linear regressions, paired t tests, correlations, and descriptive statistics.</p><p><strong>Results: </strong>In Experiment I, there was a significant improvement in behavioral measures of pitch discrimination after the study participants completed the laboratory and home-based training sessions. There was no significant effect of training on electrophysiologic measures of the auditory N1-P2 onset response and acoustic change complex (ACC). There were no significant relationships between electrophysiologic measures and behavioral outcomes after the month-long training. In Experiment II, there was no significant effect of training on the ACC, although there was a small but significant improvement in behavioral spectral ripple modulation depth thresholds after the short-term training.</p><p><strong>Conclusions: </strong>This study demonstrates that auditory training improves spectral cue perception in CI users, with significant perceptual gains observed despite cortical electrophysiological responses like the ACC not reliably predicting training benefits across short- and long-term interventions. 
Future research should further explore individual factors that may lead to greater benefit from auditory training, in addition to optimization of training protocols and outcome measures, as well as demonstrate the generalizability of these findings.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"150-162"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11649490/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141753429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
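The spectral ripple modulation depth task in Experiment II uses stimuli whose spectral envelope is sinusoidally modulated on a log-frequency axis. The sketch below generates a generic stimulus of that class; the tone-summation construction, frequency range, ripple density, and depth values are assumptions and not a reconstruction of the study's stimuli.

```python
import numpy as np

def spectral_ripple_noise(fs=44100, dur=0.5, f_lo=100.0, f_hi=5000.0,
                          ripples_per_octave=1.0, depth_db=10.0, n_tones=200,
                          seed=0):
    """Generic spectral-ripple stimulus: many random-phase tones whose levels
    follow a sinusoidal envelope on a log-frequency axis. depth_db is the
    peak-to-valley spectral modulation depth. A sketch of the stimulus class,
    not the study's exact signals."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_tones)
    ripple_phase = rng.uniform(0, 2 * np.pi)
    # Sinusoidal spectral envelope in dB across log2 frequency
    env_db = (depth_db / 2.0) * np.sin(2 * np.pi * ripples_per_octave
                                       * np.log2(freqs / f_lo) + ripple_phase)
    amps = 10 ** (env_db / 20.0)
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    sig = np.sum(amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t
                                        + phases[:, None]), axis=0)
    return sig / np.max(np.abs(sig))   # normalize to +/-1

stimulus = spectral_ripple_noise(depth_db=10.0)
```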
Ear and Hearing. Pub Date: 2025-01-01. Epub Date: 2024-07-24. DOI: 10.1097/AUD.0000000000001566
Xueying Fu, Fren T Y Smulders, Lars Riecke
{"title":"Touch Helps Hearing: Evidence From Continuous Audio-Tactile Stimulation.","authors":"Xueying Fu, Fren T Y Smulders, Lars Riecke","doi":"10.1097/AUD.0000000000001566","DOIUrl":"10.1097/AUD.0000000000001566","url":null,"abstract":"<p><strong>Objectives: </strong>Identifying target sounds in challenging environments is crucial for daily experiences. It is important to note that it can be enhanced by nonauditory stimuli, for example, through lip-reading in an ongoing conversation. However, how tactile stimuli affect auditory processing is still relatively unclear. Recent studies have shown that brief tactile stimuli can reliably facilitate auditory perception, while studies using longer-lasting audio-tactile stimulation yielded conflicting results. This study aimed to investigate the impact of ongoing pulsating tactile stimulation on basic auditory processing.</p><p><strong>Design: </strong>In experiment 1, the electroencephalogram (EEG) was recorded while 24 participants performed a loudness-discrimination task on a 4-Hz modulated tone-in-noise and received either in-phase, anti-phase, or no 4-Hz electrotactile stimulation above the median nerve. In experiment 2, another 24 participants were presented with the same tactile stimulation as before, but performed a tone-in-noise detection task while their selective auditory attention was manipulated.</p><p><strong>Results: </strong>We found that in-phase tactile stimulation enhanced EEG responses to the tone, whereas anti-phase tactile stimulation suppressed these responses. No corresponding tactile effects on loudness-discrimination performance were observed in experiment 1. Using a yes/no paradigm in experiment 2, we found that in-phase tactile stimulation, but not anti-phase tactile stimulation, improved detection thresholds. Selective attention also improved thresholds but did not modulate the observed benefit from in-phase tactile stimulation.</p><p><strong>Conclusions: </strong>Our study highlights that ongoing in-phase tactile input can enhance basic auditory processing as reflected in scalp EEG and detection thresholds. This might have implications for the development of hearing enhancement technologies and interventions.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":"46 1","pages":"184-195"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11637573/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142839743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ear and Hearing. Pub Date: 2024-12-31. DOI: 10.1097/AUD.0000000000001622
Kaylah Lalonde, Grace Dwyer, Adam Bosen, Abby Pitts
{"title":"Impact of High- and Low-Pass Acoustic Filtering on Audiovisual Speech Redundancy and Benefit in Children.","authors":"Kaylah Lalonde, Grace Dwyer, Adam Bosen, Abby Pitts","doi":"10.1097/AUD.0000000000001622","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001622","url":null,"abstract":"<p><strong>Objectives: </strong>To investigate the influence of frequency-specific audibility on audiovisual benefit in children, this study examined the impact of high- and low-pass acoustic filtering on auditory-only and audiovisual word and sentence recognition in children with typical hearing. Previous studies show that visual speech provides greater access to consonant place of articulation than other consonant features and that low-pass filtering has a strong impact on perception on acoustic consonant place of articulation. This suggests visual speech may be particularly useful when acoustic speech is low-pass filtered because it provides complementary information about consonant place of articulation. Therefore, we hypothesized that audiovisual benefit would be greater for low-pass filtered words than high-pass filtered speech. We assessed whether this pattern of results would translate to sentence recognition.</p><p><strong>Design: </strong>Children with typical hearing completed auditory-only and audiovisual tests of consonant-vowel-consonant word and sentence recognition across conditions differing in acoustic frequency content: a low-pass filtered condition in which children could only access acoustic content below 2 kHz and a high-pass filtered condition in which children could only access acoustic content above 2 kHz. They also completed a visual-only test of consonant-vowel-consonant word recognition. We analyzed word, consonant, and keyword-in-sentence recognition and consonant feature (place, voice/manner of articulation) transmission accuracy across modalities and filter conditions using binomial general linear mixed models. To assess the degree to which visual speech is complementary versus redundant with acoustic speech, we calculated the proportion of auditory-only target and response consonant pairs that we can tell apart using only visual speech and compared these values between high-pass and low-pass filter conditions.</p><p><strong>Results: </strong>In auditory-only conditions, recognition accuracy was lower for low-pass filtered consonants and consonant features than high-pass filtered consonants and consonant features, especially consonant place of articulation. In visual-only conditions, recognition accuracy was greater for consonant place of articulation than consonant voice/manner of articulation. In addition, auditory consonants in the low-pass filtered condition were more likely to be substituted for visually distinct consonants, meaning that there was more opportunity to use visual cues to supplement missing auditory information in the low-pass filtered condition. Audiovisual benefit for isolated whole words was greater for low-pass filtered speech than high-pass filtered speech. No difference in audiovisual benefit between filter conditions was observed for phonemes, features, or words-in-sentences. 
Ceiling effects limit the interpretation of these nonsignificant interactions.</p><p><strong>Conclusions: </strong>F","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
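As a concrete illustration of the 2 kHz filter conditions described above, the sketch below produces low-pass and high-pass versions of a waveform with Butterworth filters in scipy. The filter order and family are assumptions, since the study's exact filter design is not specified here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_speech(waveform, fs, cutoff_hz=2000.0, order=8):
    """Low-pass and high-pass versions of a speech waveform split at 2 kHz,
    analogous to the filter conditions above. Butterworth filters of order 8
    applied with zero-phase filtering are illustrative choices, not the
    study's actual processing."""
    sos_lp = butter(order, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    sos_hp = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos_lp, waveform), sosfiltfilt(sos_hp, waveform)

# Example with white noise standing in for a recorded word
fs = 22050
noise = np.random.default_rng(0).standard_normal(fs)   # 1 s of noise
low_passed, high_passed = filter_speech(noise, fs)
```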
Ear and Hearing. Pub Date: 2024-12-31. DOI: 10.1097/AUD.0000000000001632
Elisheba Haro-Hernandez, Patricia Perez-Carpena, Federica Di Berardino, Jose Antonio Lopez-Escamez
{"title":"Hyperacusis and Tinnitus in Vestibular Migraine Patients.","authors":"Elisheba Haro-Hernandez, Patricia Perez-Carpena, Federica Di Berardino, Jose Antonio Lopez-Escamez","doi":"10.1097/AUD.0000000000001632","DOIUrl":"https://doi.org/10.1097/AUD.0000000000001632","url":null,"abstract":"<p><strong>Objectives: </strong>To estimate the prevalence of tinnitus and hyperacusis in patients with vestibular migraine (VM), and to define the association with hearing loss, anxiety, and depression.</p><p><strong>Design: </strong>A cross-sectional, multicenter study including 51 adult patients with definite or probable VM, defined according to the Barany Society diagnostic criteria. Audiological examinations were performed by pure tones extended to high frequencies to assess hearing thresholds. Psychoacoustic (pitch, masking level, and residual inhibition) and psychometric assessment of tinnitus was performed in all patients that reported tinnitus with the following questionnaires: Tinnitus Handicap Inventory (THI), Hypersensitivity to Sound Questionnaire and Hospital Anxiety Depression Scale. Correlation and regression analyses were used to assess the relationship between THI scores hyperacusis, anxiety, and depression in patients with VM.</p><p><strong>Results: </strong>Forty-five of 50 VM patients (90%) were females; 38 out of 50 (75%) patients reported tinnitus. In our series, the most common frequency (pitch) for tinnitus was 8000 Hz. Tinnitus was not associated with hearing loss in patients with VM and the hearing thresholds were similar in VM patients with or without tinnitus. Hyperacusis was reported in 35 (60%) individuals, and in patients with tinnitus, the THI scores were associated with higher scores in Hypersensitivity to Sound Questionnaire, and anxiety and depression subscales of Hospital Anxiety Depression Scale. There were differences in the distribution of hearing loss in patients with hyperacusis, however both groups did not exceed the normal hearing threshold (17.18 ± 13.43 patients with hyperacusis and 11.66 ± 5.41, p = 0.023 in patients without hyperacusis).</p><p><strong>Conclusions: </strong>Tinnitus is a common symptom in patients with VM and it is not related to hearing loss in the standard audiogram. Hyperacusis was associated with tinnitus, anxiety, and depression, but it was not associated with hearing thresholds.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}