Ear and Hearing | Pub Date: 2025-06-19 | DOI: 10.1097/AUD.0000000000001679
Leigh B Fernandez, Muzna Shehzad, Lauren V Hadley
"Effects of Hearing Loss on Semantic Prediction: Delayed Prediction for Intelligible Speech When Listening Is Demanding."

Objectives: Linguistic context can be used during speech listening to predict what a talker will say next. These predictions may be particularly useful in adverse listening conditions, since they can facilitate speech processing. In this study, we investigated the impact of postlingual hearing loss on prediction processes. Because hearing loss leads to a perceptual deficit (i.e., degraded auditory input) that can also have cognitive impacts (i.e., increased competition for cognitive resources due to increased listening effort), it is a naturalistic test case of how different sorts of challenge affect prediction.

Design: We report a visual world eye-tracking study run with three participant groups: older adults (range: 53 to 80 years old) with normal hearing (n = 30), older adults with hearing loss listening under low demand (n = 32), and older adults with hearing loss listening under high demand (n = 31). Using highly semantically constraining predictable sentences, we analyzed the timecourse of simple associative predictions based on the agent of the sentence (sub-experiment 1), and the timecourse by which these predictions were narrowed with additional constraint provided by the verb (sub-experiment 2).

Results: Although there was no effect of group on early agent-based predictions, the buildup and tailoring of verb-based prediction were delayed with hearing loss and exacerbated by listening demand. As there was no comparable group difference for semantically unconstraining neutral sentences, this cannot be explained as a result of delayed lexical access in the hearing loss groups. We also assessed the cost of incorrect predictions but did not see any group differences.

Conclusion: These findings indicate two separable stages of prediction that are differently affected by hearing loss and listening demand (potentially due to changes in listening effort), and reveal delayed prediction as a cognitive impact of hearing loss that could compound simple audibility effects.
Ear and Hearing | Pub Date: 2025-06-19 | DOI: 10.1097/AUD.0000000000001687
Joshua G W Bernstein, Matthew J Goupell
"The Roles of Selective Attention and Asymmetric Experience in Bilateral Speech Interference for Single-Sided Deafness Cochlear Implant and Vocoder Listeners."

Objectives: For many (especially older) single-sided-deafness (SSD) cochlear-implant (CI) users (one normal hearing and one CI ear), masking speech in the acoustic ear can interfere with CI-ear speech recognition. This study examined two possible explanations for this "bilateral speech interference." First, it might reflect a general (i.e., not specific to spatial hearing or CI use) age-related "selective-attention" deficit, with some listeners having difficulty attending to target speech while ignoring an interferer. Second, it could be specific to asymmetric-hearing experience, reflecting maladaptive plasticity with the better ear becoming favored over time.

Design: Twenty-eight listeners with bilaterally normal or near-normal hearing (NH) through 4 kHz completed a series of speech-on-speech masking tasks. Vocoder simulations of SSD-CI listening (four- or eight-channel noise-vocoded speech in the right ear, unprocessed speech in the left) tested whether acutely simulated asymmetric hearing would produce interference comparable to that previously observed for 13 SSD-CI listeners. Both groups had a wide age range (NH: 20 to 84 years; SSD-CI: 36 to 74 years) and were therefore expected to exhibit a wide range of selective-attention ability. The primary set of conditions measured bilateral speech interference. Target coordinate-response-measure sentences mixed with a masker of similar fundamental frequency (F0) were presented to the right (vocoded) ear at target-to-masker ratios of 0, 4, 8, or 16 dB. Silence or a copy of the masker was presented to the left (unprocessed) ear. Bilateral speech interference (the performance decrease from adding the masker copy to the left ear) was compared with previous SSD-CI results. NH listeners also completed two additional sets of conditions. The first set measured the F0-difference benefit for unprocessed monaural speech-on-speech masking, a likely indicator of non-spatial selective-attention ability, based on previous findings that older adults benefit less than younger adults from target-masker F0 differences. The second set measured contralateral-unmasking benefit: target and masking speech were presented to the unprocessed ear, and the benefit from presenting a copy of the masking speech to the vocoded ear was measured. A linear mixed-model analysis examined relationships between NH bilateral speech interference and age, monaural speech-on-speech masking (to estimate non-spatial selective attention), and contralateral unmasking. An additional analysis compared NH-vocoder to SSD-CI interference.

Results: The strongest predictor of NH-vocoder interference was performance in the monaural different-F0 speech-on-speech masking condition (p = 0.0024). Neither similar-F0 speech-on-speech masking performance, nor age, nor contralateral unmasking accounted for significant additional variance (p = 0.11 to 0.69). Mean SSD-CI interference magnitude …
Ear and Hearing | Pub Date: 2025-06-16 | DOI: 10.1097/AUD.0000000000001694
Graham Naylor, Lauren K Dillard, Oliver Zobay, Gabrielle H Saunders
"Associations Between Pre-Fitting Factors and 2-Year Hearing Aid Use Persistence, Derived From Health Records and Post-Fitting Battery Order Data of 284,175 US Veterans."

Objectives: To examine associations between factors in the domains of general health, hearing status, and demography and subsequent long-term persistence of hearing aid (HA) use. By examining only non-modifiable factors available before HA fitting, we focus on potential indicators of a need for additional clinical effort to achieve satisfactory outcomes.

Design: The initial dataset consisted of Electronic Health Records spanning 2012-2017 for all 731,231 patients with HA orders from U.S. Department of Veterans Affairs audiology between April 1, 2012 and October 31, 2014. After applying inclusion criteria (valid HA use persistence data, complete audiograms, age ≥50 years, audiometric pure-tone average (PTA) ≥25 dB HL, 5-year clearance period for health conditions) and excluding records with codes for cochlear implants, the final sample comprised 284,175 patients. Independent variables encompassed audiological (PTA, PTA asymmetry, audiogram slope, audiogram complexity, new versus experienced HA user), health (dementia, mild cognitive impairment, other mental health conditions, multimorbidity, in-patient episodes), and demographic (age, race, ethnicity, partnership status, income, urban-rural home location) domains. The outcome measure was HA use persistence at 2 years post-fitting, based on battery orders within the 18 months preceding the 2-year mark. Multiple logistic regression modeling was applied with this measure as the outcome; continuous variables were discretized, and missing data were imputed.

Results: After adjusting for covariates through the regression model, a significant positive association was found between PTA severity and HA use persistence, while PTA asymmetry, audiogram slope, and audiogram complexity were negatively associated with persistence. Being a new HA user, being diagnosed with dementia or other mental health conditions, and increased multimorbidity were all associated with reduced persistence. Persistence peaked at ages 70 to 79, and was lower for non-White races, Hispanic ethnicity, and those not married. No significant associations were found between persistence and tinnitus, urban-rural living, or mild cognitive impairment (when the model included dementia or other mental health conditions).

Conclusions: Pre-fitting personal factors other than audiological ones have independent, and summatively major, influence on HA use persistence. While not modifiable, some are potentially usable as flags for a differentiated approach to patient management.
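As a rough illustration of the modeling approach this entry describes (binary 2-year persistence regressed on discretized pre-fitting factors), the sketch below fits a logistic regression by Newton-Raphson on synthetic data. The predictor names, effect sizes, and data are invented for illustration and are not the study's; the actual model included many more covariates and imputation of missing values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # synthetic cohort; the study had 284,175 patients

# Illustrative discretized predictors (hypothetical coding)
pta_band = rng.integers(0, 3, n).astype(float)   # 0/1/2 = increasing PTA severity band
new_user = rng.integers(0, 2, n).astype(float)   # 1 = new hearing-aid user
dementia = rng.binomial(1, 0.05, n).astype(float)

# Simulate 2-year persistence: more likely with worse PTA,
# less likely for new users and with dementia (assumed directions, per the abstract)
X = np.column_stack([np.ones(n), pta_band, new_user, dementia])
true_beta = np.array([-0.2, 0.5, -0.6, -0.8])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p).astype(float)

# Fit logistic regression by Newton-Raphson (iteratively reweighted least squares)
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))       # predicted persistence probability
    w = mu * (1.0 - mu)                        # IRLS weights
    beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - mu))

odds_ratios = np.exp(beta)  # OR > 1: factor associated with higher odds of persistence
```

Odds ratios above 1 flag factors associated with higher odds of persistence (here, worse PTA), and below 1 with lower odds (here, being a new user or having dementia), mirroring the direction of the reported associations.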
Ear and Hearing | Pub Date: 2025-06-16 | DOI: 10.1097/AUD.0000000000001683
Justin T Fleming, Matthew B Winn
"Seeing a Talker's Mouth Reduces the Effort of Perceiving Speech and Repairing Perceptual Mistakes for Listeners With Cochlear Implants."

Objectives: Seeing a talker's mouth improves speech intelligibility, particularly for listeners who use cochlear implants (CIs). However, the impacts of visual cues on listening effort for listeners with CIs remain poorly understood, as previous studies have focused on listeners with typical hearing (TH) and featured stimuli that do not invoke effortful cognitive speech perception challenges. This study directly compared the effort of perceiving audiovisual speech between listeners who use CIs and those with TH. Visual cues were hypothesized to yield more relief from listening effort in a cognitively challenging speech perception condition that required listeners to mentally repair a missing word in the auditory stimulus. Eye gaze was simultaneously measured to examine whether the tendency to look toward a talker's mouth would increase during these moments of uncertainty about the speech stimulus.

Design: Participants included listeners with CIs and an age-matched group of participants with typical age-adjusted hearing (N = 20 in both groups). The magnitude and time course of listening effort were evaluated using pupillometry. In half of the blocks, phonetic visual cues were severely degraded by selectively blurring the talker's mouth, which preserved stimulus luminance so visual conditions could be compared using pupillometry. Each block included a mixture of trials in which the sentence audio was intact and trials in which a target word in the auditory stimulus was replaced by noise; the latter required participants to mentally reconstruct the target word upon repeating the sentence. Pupil and gaze data were analyzed using generalized additive mixed-effects models to identify the stretches of time during which effort or gaze strategy differed between conditions.

Results: Visual release from effort was greater and lasted longer for listeners with CIs compared with those with TH. Within the CI group, visual cues reduced effort to a greater extent when a missing word needed to be repaired than when the speech was intact. Seeing the talker's mouth also improved speech intelligibility for listeners with CIs, including reducing the number of incoherent verbal responses when repair was required. The two hearing groups deployed different gaze strategies when perceiving audiovisual speech: CI listeners looked more at the mouth overall, even when it was blurred, while TH listeners tended to increase looks to the mouth in the moment following a missing word in the auditory stimulus.

Conclusions: Integrating visual cues from a talker's mouth not only improves speech intelligibility but also reduces listening effort, particularly for listeners with CIs. For listeners with CIs (but not those with TH), these visual benefits are magnified when a missed word needs to be mentally corrected, a common occurrence during everyday speech perception for individuals with hearing loss.
Ear and Hearing | Pub Date: 2025-06-09 | DOI: 10.1097/AUD.0000000000001680
Lyan Porto, Jan Wouters, Astrid van Wieringen
"Speech Understanding in Noise Under Different Attentional Demands in Children With Typical Hearing and Cochlear Implants."

Objectives: Complex listening environments are common in the everyday life of both adults and children and often require that listeners monitor possible speakers and switch or maintain attention as the situation requires. The aim of the present study was to investigate the effects of these attention dynamics on speech perception in adults, children with typical hearing (TH), and children with cochlear implants (CIs).

Design: Twenty-seven adults with TH (mean age 20.8 years), 24 children with TH (mean age 10.6 years), and 8 children with CIs (mean age 10.1 years) were tested on a speech-understanding-in-noise task using AVATAR, a realistic audiovisual paradigm. Participants were asked to repeat each sentence as closely as possible. In one task, participants performed an adaptive speech-in-noise task to determine speech reception thresholds for sentences recorded by a male and a female speaker. In the second task, both male and female speakers could speak simultaneously in controlled conditions that required participants to either switch attention from one to the other or maintain attention on the first. Eye-tracking data were collected concomitantly with both listening tasks, providing pupillometry and gaze behavior data. Participants also completed cognitive tests assessing memory, attention, processing speed, and language ability.

Results: Listening data showed that all groups had more difficulty switching attention from a distractor to a target than maintaining attention on a target and ignoring an incoming distractor. In the single-talker task, adults performed better than children, and children with TH performed better than children with CIs. In addition, pupillometry data showed that children with CIs exerted more listening effort in the single-talker task. Gaze data suggest that listeners fixate longer on the target under more challenging conditions, but if demands on attention become too great, eye movements increase. Cognitive tests supported previous evidence that the difficulties children with CIs have in understanding speech in noise are related to difficulties in sustaining attention.

Conclusions: Switching attention is more challenging than sustaining attention in the listening situations children, including CI users, face every day. Furthermore, children with CIs appear to exert effort beyond what is captured by listening tasks and struggle with maintaining attention over longer periods than typically hearing peers, highlighting the need to consider the characteristics of the learning environments of children with CIs even when hearing thresholds are in the typical range.
Ear and Hearing | Pub Date: 2025-06-06 | DOI: 10.1097/AUD.0000000000001676
Dakota Bysouth-Young, François Guérit, Lidea Shahidi, Robert P Carlyon
"Measurement of Spectro-Temporal Processing by Cochlear Implant Users: Effects of Stimulus Level and Validation of an Online Implementation."

Objectives: Evaluating adjustments to cochlear implant (CI) settings is challenging, as recipients need time to adapt for optimal speech test performance. The Spectro-Temporal Ripple for Investigating Processor EffectivenesS (STRIPES) test, a language-independent measure of spectro-temporal resolution, has been validated with Advanced Bionics and Cochlear CI systems. This study investigates whether performance on the STRIPES test varies with presentation level in a loudspeaker setup and its relationship with outcomes on the British Coordinate Response Measure (CRM) test. In addition, it extends the use of STRIPES and its online version "webSTRIPES" to Med-El CI systems.

Design: A prospective, single-blind, two-session repeated-measures study was conducted with 10 CI users. The first session included three blocks: pre-test webSTRIPES, STRIPES at three loudspeaker presentation levels (50, 65, and 75 dB SPL), and post-test webSTRIPES. The second session measured the speech reception threshold (SRT70) for CRM sentences with a time-reversed speech masker, presented at the same three levels.

Results: Presentation level did not significantly affect STRIPES ripple density thresholds or SRT70 for CRM sentences. A significant correlation was found between STRIPES loudspeaker and webSTRIPES thresholds. WebSTRIPES showed good-to-excellent test-retest reliability. The correlation between CRM SRT70 and STRIPES thresholds, while in the predicted direction, was not statistically significant, likely due to the small sample size (n = 7), which may have limited the power to detect a meaningful relationship.

Conclusions: STRIPES and webSTRIPES ripple density threshold scores can be reliably measured with Med-El CI systems, unaffected by presentation level. The STRIPES test is a promising tool for assessing adult CI listener outcomes without requiring prolonged acclimatization to programming changes.
Ear and Hearing | Pub Date: 2025-06-03 | DOI: 10.1097/AUD.0000000000001688
Masahito Minagi, Kei Tabaru, Hamish Innes-Brown, Manae Kubo, Taiki Komoda, Yuko Kataoka, Mizuo Ando
"Assessment of Listening Effort by Eye Blinks and Head Tilt Angle Using a Glasses-Type Wearable Device."

Objectives: Listening effort is the mental effort that increases in situations where listening is challenging. Objective indicators are needed to assess listening effort, but no established testing methods can be performed in a daily environment. We used a glasses-type wearable device (JINS MEME, JINS Inc., Tokyo, Japan) equipped with an electrooculography sensor and an acceleration/angular velocity sensor to measure the number of eye blinks and changes in head tilt angle during listening under noise, and investigated its use as an objective indicator of listening effort.

Design: The study included 16 normal-hearing individuals (mean age = 27.94 years, SD = 7.18 years). They wore the glasses-type wearable device and were asked to repeat a passage presented at 60 dB SPL. Three conditions were performed, with signal to noise ratios (SNRs) of 0, -5, and -10 dB. The number of eye blinks and head tilt angle were measured during the time spent listening to the conversation (listening period) and the time spent repeating it after listening (response period). After each task, the effort and motivation required for that trial were evaluated subjectively on a scale. Friedman tests were performed on the percentage of correct words repeated as well as on subjective scores for effort and motivation across SNRs. A linear mixed model was used to evaluate the effects of SNR and interval (listening period versus response period) on the number of eye blinks and head tilt angle. In addition, correlation analysis was performed on each indicator.

Results: As the SNR decreased, the correct answer rate and motivation score decreased, and the effort score increased. These changes were significantly greater at -10 dB SNR than in the other two conditions. The eye blink rate was significantly higher in the -5 dB SNR condition than at 0 dB SNR, and was significantly higher in the response period than in the listening period, regardless of SNR. The head tilt angle shifted forward as the SNR decreased, in both the listening and response periods. No significant correlation was observed between the indicators.

Conclusions: The number of eye blinks increased during listening in noise but decreased with decreased subjective motivation. Head tilt angle shifted forward when the noise load increased, indicating that participants leaned toward the sound source. Changes in the number of eye blinks and head tilt angle during listening in noise may be objective indicators related to listening effort that can be detected quantitatively and simply using a glasses-type wearable device.
Ear and Hearing | Pub Date: 2025-05-29 | DOI: 10.1097/AUD.0000000000001690
Hendrik Christiaan Stronks, Timothy Samuel Arendsen, Mirte Veenstra, Peter-Paul Bernard Marie Boermans, Jeroen Johannes Briaire, Johan Hubertus Maria Frijns
"Effects of Preoperative Factors on the Learning Curves of Postlingual Cochlear Implant Recipients."

Objectives: The substantial variability in speech perception outcomes after cochlear implantation complicates efforts to develop valid predictive models of these outcomes. Existing predictive regression models are too unreliable for clinical application, possibly because speech intelligibility (SI) after cochlear implant (CI) rehabilitation is often based on a limited number of assessments. The development of SI after CI has rarely been detailed, although knowing the shape of the learning curve can potentially improve predictive modeling. Knowing the learning curve after CI could also aid in setting expectations about SI immediately after implantation and about the duration of rehabilitation. The current objectives were to construct learning curves to estimate baseline SI at 1 week (B), maximal SI after rehabilitation (M), and rehabilitation time (time to reach 80% of the learning effect; t[M - B]80%), and to subsequently deploy these outcomes for multiple-regression modeling to predict CI outcomes.

Design: To assess rehabilitation after cochlear implantation, we retrospectively fitted learning curves using clinically available SI assessments from 533 postlingually deaf, unilaterally implanted adults. SI was assessed with consonant-vowel-consonant (CVC) words in quiet, with phoneme score as the outcome measure. Participants were followed for up to 4 years, with SI measurements collected at fixed intervals; SI was commonly assessed 1, 2, 4, and 8 weeks after device activation. B, M, and t(M - B)80% were determined from the fitted learning curves. Predictive multiple-regression analyses were performed on these three outcome measures based on eight previously identified preoperative demographic and audiometric predictor variables: age at implantation, duration of severe-to-profound hearing loss, best-aided CVC phoneme score (in the free field), unaided ipsilateral and contralateral residual hearing and CVC phoneme scores (measured with headphones), and education type (regular or special education).

Results: At 1 week after CI activation, raw phoneme scores had increased from 40% preoperatively (best-aided condition) to 51%, with further improvement to approximately 78% at 4 years. SI increased significantly until 1 year after activation and then plateaued. Fitted learning curves supported better estimates of these parameters, showing that average baseline SI at 1 week after CI activation was 51%, increasing to 85% after rehabilitation. The asymptotic score exceeded the raw average after 4 years because many cases had not yet plateaued. The median t(M - B)80% was 1.5 months. Predictive modeling identified duration of hearing loss, age at implantation, best-aided CVC phoneme score, and education type as the most robust predictors for postoperative SI. Despite the statistically significant correlations, however, the combined predictive value was ~19% for B, 10% for M, and 2% for t(M - B)80%.
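The abstract defines three outcome measures (B, M, and t[M - B]80%) extracted from fitted learning curves but does not state the curve's functional form. Assuming an exponential approach to asymptote (a common choice for such curves, and purely an assumption here), the fit and the 80% rehabilitation time can be sketched as follows; the assessment times and scores are hypothetical, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(t, B, M, tau):
    # SI (%) at t months post-activation: exponential rise from
    # baseline B toward asymptote M with time constant tau (assumed form)
    return M - (M - B) * np.exp(-t / tau)

# Hypothetical CVC phoneme scores (%) at fixed intervals (months post-activation)
t_obs = np.array([0.25, 0.5, 1.0, 2.0, 6.0, 12.0, 24.0, 48.0])
si_obs = np.array([51.0, 58.0, 65.0, 72.0, 80.0, 84.0, 85.0, 85.0])

(B, M, tau), _ = curve_fit(learning_curve, t_obs, si_obs, p0=[50.0, 85.0, 2.0])

# t(M - B)80%: time at which 80% of the B-to-M gain is reached.
# Solving M - (M - B) * exp(-t/tau) = B + 0.8 * (M - B) gives
# exp(-t/tau) = 0.2, so t80 = tau * ln(5).
t80 = tau * np.log(5)
```

Under this parameterization the asymptote M can exceed every observed score when the series has not yet plateaued, which matches the abstract's observation that the fitted asymptote (85%) lay above the raw 4-year average (78%).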
Ear and Hearing | Pub Date: 2025-05-29 | DOI: 10.1097/AUD.0000000000001689
Lisbeth Birkelund Simonsen, Jaime A Undurraga, Abigail Anne Kressner, Torsten Dau, Søren Laugesen
"Auditory Change Complex Responses to Spectrotemporally Modulated Stimuli."

Objectives: The non-language-dependent Audible Contrast Threshold (ACT) test is a clinically viable spectrotemporal modulation detection test and serves as an alternative to language-specific speech-in-noise tests. However, the ACT test requires active participation, which is naturally challenging for infants, young children, and individuals with developmental or intellectual differences. This article focuses on the specifications and design of an electrophysiological version of ACT (E-ACT). A test paradigm was developed based on auditory change complex (ACC) responses to spectrotemporally modulated stimuli. This study investigated the effects of two potential carriers for the test stimuli, differences in responses between brain hemispheres (represented by the left and right mastoids), and the effect of the direction of ACC change, in order to optimally design an E-ACT. Finally, several strategies for defining individual thresholds for the E-ACT were compared.

Design: Two experiments were conducted with 18 and 47 adult participants, respectively, all with pure-tone hearing thresholds at or below 75 dB HL at frequencies up to and including 2 kHz. The stimulus, consisting of spectrotemporally modulated targets alternating with unmodulated references, each presented for approximately 1 sec, elicited ACC responses from the participants. In Experiment A, both noise- and tonal-carrier stimuli were used, while in Experiment B, only tonal-carrier stimuli were included. Electroencephalogram data were analyzed using the objective Fmpi (individualized multi-point Fsp) detector to estimate whether a response was present.

Results: The tonal-carrier stimuli elicited significantly more detected responses compared with the noise-carrier stimuli. Analysis of hemispheric dominance revealed a significantly higher detection rate for ACC responses from the right mastoid compared with the left. However, the highest detection rate was observed when averaging responses from both mastoids. When ACC responses were divided into subcategories based on the direction of auditory change, the reference-to-target change ("On") produced a significantly higher detection rate than the target-to-reference change ("Off"). Pooling the "On" and "Off" responses did not increase the detection rates. The most effective strategy for determining the E-ACT threshold was to select the direction of auditory change of the mastoid average that was individually strongest in the first recording at maximum modulation.

Conclusions: The present findings suggest that an electrophysiological version of ACT should be based on the tonal-carrier stimulus. To define individual thresholds for an E-ACT, the ACC should be determined as the average of left and right hemispheric responses, using only the direction of auditory change that is individually strongest during the first recording.