Trends in Hearing | Pub Date: 2025-01-01 | DOI: 10.1177/23312165241306091
Khaled H A Abdel-Latif, Thomas Koelewijn, Deniz Başkent, Hartmut Meister
Title: "Assessment of Speech Processing and Listening Effort Associated With Speech-on-Speech Masking Using the Visual World Paradigm and Pupillometry"
Abstract: Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a prerequisite for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal. This study aimed to propose a new VWP to examine the time course of speech segregation when competing sentences are presented, and to collect pupil size data as a measure of listening effort. Twelve young normal-hearing participants were presented with competing matrix sentences (structure "name-verb-numeral-adjective-object") diotically via headphones at four target-to-masker ratios (TMRs), corresponding to intermediate to near-perfect speech recognition. The VWP visually presented the number and object words from both the target and masker sentences. Participants were instructed to gaze at the corresponding words of the target sentence without providing verbal responses. The gaze fixations consistently reflected the different TMRs for both number and object words. The slopes of the fixation curves were steeper, and the proportion of target fixations higher, at more favorable TMRs, suggesting more efficient segregation under better conditions. Temporal analysis of pupil data using Bayesian paired-sample t-tests showed a corresponding reduction in pupil dilation with increasing TMR, indicating reduced listening effort. The results support the conclusion that the proposed VWP, with its captured eye movements and pupil dilation, is suitable for objective assessment of sentence-based speech-on-speech segregation and the corresponding listening effort.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11726529/pdf/
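The core VWP outcome measures named in this abstract are the proportion of target fixations over time and the slope of the fixation curve. As a minimal sketch of how such measures could be computed, the following pure-Python example bins per-trial fixation labels, computes the proportion of trials fixating the target in each time bin, and fits a least-squares slope. All function names, data, and bin sizes are invented for illustration; this is not the authors' analysis pipeline.

```python
# Hypothetical VWP fixation-curve sketch: proportion of target fixations per
# time bin across trials, plus the least-squares slope of that curve.

def fixation_proportions(trials, n_bins):
    """trials: list of per-trial fixation sequences, one label per time bin."""
    props = []
    for b in range(n_bins):
        hits = sum(1 for t in trials if t[b] == "target")
        props.append(hits / len(trials))
    return props

def curve_slope(props, bin_ms):
    """Least-squares slope of fixation proportion per millisecond."""
    n = len(props)
    xs = [i * bin_ms for i in range(n)]
    mx = sum(xs) / n
    my = sum(props) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, props))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Fabricated gaze data: three trials, four 250-ms bins each.
trials = [
    ["masker", "masker", "target", "target"],
    ["masker", "target", "target", "target"],
    ["masker", "masker", "masker", "target"],
]
props = fixation_proportions(trials, n_bins=4)  # rises from 0.0 to 1.0
slope = curve_slope(props, bin_ms=250)
```

A steeper `slope` at higher TMRs would correspond to the more efficient segregation the study reports.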
Trends in Hearing | Pub Date: 2025-01-01 | DOI: 10.1177/23312165251320794
Alexina Whitley, Timothy Beechey, Lauren V Hadley
Title: "Who Said That? The Effect of Hearing Ability on Following Sequential Utterances From Varying Talkers in Noise"
Abstract: Many of our conversations occur in nonideal situations, from the hum of a car to the babble of a cocktail party. Additionally, in conversation, listeners are often required to switch their attention between multiple talkers, which places demands on both auditory and cognitive processes. Speech understanding in such situations appears to be particularly demanding for older adults with hearing impairment. This study examined the effects of age and hearing ability on performance in an online speech recall task. Two target sentences, spoken by the same talker or by different talkers, were presented one after the other, analogous to a conversational turn switch. The first target sentence was presented in quiet, and the second alongside either a noise masker (steady-state speech-shaped noise) or a speech masker (another, nontarget sentence). Relative to when the target talker remained the same between sentences, listeners were less accurate at recalling information from the second target sentence when the target talker changed, particularly when the target talker for sentence one became the masker for sentence two. Listeners with poorer speech-in-noise reception thresholds were less accurate in both noise- and speech-masked trials and made more masker confusions in speech-masked trials. Furthermore, an interaction revealed that listeners with poorer speech reception thresholds had particular difficulty when the target talker remained the same. Our study replicates previous research on the costs of switching nonspatial attention, extending these findings to older adults with a range of hearing abilities.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11851761/pdf/
Trends in Hearing | Pub Date: 2025-01-01 | DOI: 10.1177/23312165251317925
Katrina Kate S McClannahan, Sarah McConkey, Julia M Levitan, Thomas L Rodebaugh, Jonathan E Peelle
Title: "Social Anxiety, Negative Affect, and Hearing Difficulties in Adults"
Abstract: Subjective ratings of communication function reflect both auditory sensitivity and the situational, social, and emotional consequences of communication difficulties. Listeners interact with people and their environment differently, handle stressful situations in various ways, and have diverse communication needs. Understanding the relationship between auditory and mental health factors is therefore crucial for holistic diagnosis and treatment of communication difficulty, particularly as mental health and communication function may affect each other bidirectionally. The goal of this study was to evaluate the degree to which social anxiety and negative affect (encompassing generalized anxiety, depression, and anger) contributed to subjective communication function (hearing handicap) in adult listeners. A cross-sectional online survey was administered via REDCap. Primary measures were brief assessments of social anxiety, negative affect, and subjective communication function. Participants were 628 adults (408 women, 220 men), ages 19 to 87 years (mean = 43), living in the United States. Individuals reporting higher social anxiety and higher negative affect also reported poorer communication function, and multiple linear regression revealed that negative affect and social anxiety were each significant, unique predictors of subjective communication function: both contribute to how much someone feels a hearing loss impacts their daily communication. Further examination of social anxiety and negative affect in older adults with hearing loss may help researchers and clinicians understand the complex interactions between mental health and sensory function during everyday communication in this rapidly growing clinical population.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11803679/pdf/
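The study above regresses subjective communication function on two predictors at once (social anxiety and negative affect). As an illustrative sketch, not the authors' code, the following solves the two-predictor ordinary-least-squares problem directly via the 3x3 normal equations; the data and variable names are fabricated, chosen so the true coefficients are recovered exactly.

```python
# Two-predictor OLS via the normal equations (Gauss-Jordan on a 3x3 system),
# sketching a regression like: handicap ~ social_anxiety + negative_affect.

def ols_two_predictors(x1, x2, y):
    """Return (intercept, b1, b2) minimizing squared error for y ~ 1 + x1 + x2."""
    n = len(y)
    # X^T X and X^T y for design matrix columns [1, x1, x2].
    a = [[n, sum(x1), sum(x2)],
         [sum(x1), sum(v * v for v in x1), sum(u * v for u, v in zip(x1, x2))],
         [sum(x2), sum(u * v for u, v in zip(x1, x2)), sum(v * v for v in x2)]]
    b = [sum(y),
         sum(u * v for u, v in zip(x1, y)),
         sum(u * v for u, v in zip(x2, y))]
    # Gauss-Jordan elimination (no pivoting; fine for this well-posed example).
    for i in range(3):
        p = a[i][i]
        a[i] = [v / p for v in a[i]]
        b[i] /= p
        for j in range(3):
            if j != i:
                f = a[j][i]
                a[j] = [vj - f * vi for vj, vi in zip(a[j], a[i])]
                b[j] -= f * b[i]
    return tuple(b)

# Synthetic scores generated from y = 1 + 2*x1 + 3*x2, so an exact fit exists.
x1 = [0, 1, 2, 3, 4, 5]          # hypothetical social-anxiety scores
x2 = [1, 0, 2, 1, 3, 2]          # hypothetical negative-affect scores
y = [1 + 2 * u + 3 * v for u, v in zip(x1, x2)]
intercept, b1, b2 = ols_two_predictors(x1, x2, y)
```

In practice one would use a statistics package (the paper does not specify its software), but the unique contribution of each predictor corresponds to its coefficient here.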
Trends in Hearing | Pub Date: 2025-01-01 | DOI: 10.1177/23312165241309589
Kuan-Yu Lai, Hung-Che Lin, Wan-Ting Shih, Wu-Chien Chien, Chi-Hsiang Chung, Mingchih Chen, Jeng-Wen Chen, Hung-Chun Chung
Title: "Association of Increased Risk of Injury in Adults With Hearing Loss: A Population-Based Cohort Study"
Abstract: This nationwide retrospective cohort study examines the association between hearing loss (HL) in adults and subsequent injury risk. Utilizing data from the Taiwan National Health Insurance Research Database (2000-2017), the study included 19,480 patients with HL and 77,920 matched controls. Over an average follow-up of 9.08 years, 18.30% of the 97,400 subjects sustained subsequent all-cause injuries. Injury incidence was significantly higher in the HL group than in the control group (24.04% vs. 16.86%, p < .001). After adjusting for demographics and comorbidities, the adjusted hazard ratio (aHR) for injury in the HL cohort was 2.35 (95% CI: 2.22-2.49). Kaplan-Meier analysis showed significant differences in injury-free survival between the HL and control groups (log-rank test, p < .001). The increased risk was consistent across age groups (18-64 and ≥65 years), with the HL group showing a higher risk of unintentional injuries (aHR: 2.62; 95% CI: 2.45-2.80), including falls (aHR: 2.83; 95% CI: 2.52-3.17) and traffic-related injuries (aHR: 2.38; 95% CI: 2.07-2.74). These findings highlight an independent association between HL and increased injury risk, underscoring the need for healthcare providers to counsel adult HL patients on preventive measures.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11736742/pdf/
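The injury-free survival curves in this cohort study come from the Kaplan-Meier estimator, which multiplies, at each event time, the fraction of at-risk subjects who remain injury-free. A minimal pure-Python version with made-up follow-up data (not the study's data) looks like this:

```python
# Minimal Kaplan-Meier estimator: S(t) = product over event times t_i <= t
# of (1 - d_i / n_i), where d_i = events at t_i and n_i = subjects at risk.

def kaplan_meier(times, events):
    """times: follow-up time per subject; events: 1 = injury, 0 = censored.
    Returns [(t, S(t))] at each distinct time with at least one event."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, out = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = n_t = 0
        while i < len(order) and times[order[i]] == t:  # group ties at time t
            d += events[order[i]]
            n_t += 1
            i += 1
        if d:  # censored-only times change the risk set but not the curve
            surv *= 1.0 - d / at_risk
            out.append((t, surv))
        at_risk -= n_t
    return out

# Fabricated follow-up data: seven subjects, four injuries, three censored.
times = [2, 3, 3, 5, 8, 8, 9]
events = [1, 1, 0, 1, 0, 1, 0]
curve = kaplan_meier(times, events)  # e.g., S(2) = 6/7, S(3) = 5/7, ...
```

The log-rank test and the adjusted hazard ratios in the abstract build on this same risk-set bookkeeping, with the aHR additionally adjusting for demographics and comorbidities via Cox regression.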
Trends in Hearing | Pub Date: 2025-01-01 | DOI: 10.1177/23312165241312449
Aaron C Moberly, Liping Du, Terrin N Tamati
Title: "Individual Differences in the Recognition of Spectrally Degraded Speech: Associations With Neurocognitive Functions in Adult Cochlear Implant Users and With Noise-Vocoded Simulations"
Abstract: When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal; CI listening is often simulated through noise-vocoding. This study investigated the neurocognitive mechanisms supporting recognition of spectrally degraded speech in adult CI users and normal-hearing (NH) peers listening to noise-vocoded speech, with the hypothesis that an overlapping set of neurocognitive functions would contribute to speech recognition in both groups. Ninety-seven adults participated, with either a CI (54 individuals, mean age 66.6 years, range 45-87 years) or age-normal hearing (43 individuals, mean age 66.8 years, range 50-81 years). Listeners heard materials varying in linguistic complexity: isolated words, meaningful sentences, anomalous sentences, high-variability sentences, and audiovisually (AV) presented sentences. Participants were also tested for vocabulary knowledge, nonverbal reasoning, working memory capacity, inhibition-concentration, and speed of lexical and phonological access. Performance on each speech recognition task was regressed on the neurocognitive measures using linear regression with robust standard errors. Nonverbal reasoning contributed to meaningful sentence recognition in NH peers and to anomalous sentence recognition in CI users. Speed of lexical access contributed to performance on most speech tasks for CI users but not for NH peers. Finally, inhibition-concentration and vocabulary knowledge contributed to AV sentence recognition in NH listeners alone. Findings suggest that the complexity of speech materials may determine the particular contributions of neurocognitive skills, and that NH processing of noise-vocoded speech may not represent how CI listeners process speech.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11742172/pdf/
Trends in Hearing | Pub Date: 2025-01-01 | DOI: 10.1177/23312165251320789
Michael L Smith, Matthew B Winn
Title: "Repairing Misperceptions of Words Early in a Sentence is More Effortful Than Repairing Later Words, Especially for Listeners With Cochlear Implants"
Abstract: The process of repairing misperceptions has been identified as a contributor to effortful listening in people who use cochlear implants (CIs). The current study was designed to examine the relative cost of repairing misperceptions at earlier or later parts of a sentence containing contextual information that could be used to infer words both predictively and retroactively. Misperceptions were enforced at specific times by replacing single words with noise. Changes in pupil dilation were analyzed to track differences in the timing and duration of effort, comparing listeners with typical hearing (TH) to listeners with CIs. Increases in pupil dilation were time-locked to the moment of the missing word, with longer-lasting increases when the missing word occurred earlier in the sentence. Compared to listeners with TH, CI listeners showed elevated pupil dilation for longer periods after listening, suggesting a lingering effect of effort after sentence offset. When needing to mentally repair missing words, CI listeners also made more mistakes on words elsewhere in the sentence, even though those words were not masked. Changes in effort based on the position of the missing word were not evident in basic measures like peak pupil dilation and emerged only when the full time course was analyzed, suggesting that the timing analysis adds new information to our understanding of listening effort. These results demonstrate that some mistakes are more costly than others and incur different levels of mental effort to resolve, underscoring the information lost when characterizing speech perception with simple measures like percent-correct scores.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11851752/pdf/
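The abstract's key methodological point is that a peak-only pupil measure can miss effects that a full time-course analysis reveals. A toy illustration with fabricated pupil traces (not the study's data or analysis code): two conditions share the same peak dilation, so a peak comparison shows nothing, but windowed means expose the longer-lasting dilation in one condition.

```python
# Peak vs. time-course pupil analysis on fabricated traces: identical peaks,
# different durations of elevated dilation.

def peak(trace):
    return max(trace)

def windowed_means(trace, win):
    """Mean dilation in consecutive non-overlapping windows of `win` samples."""
    return [sum(trace[i:i + win]) / win for i in range(0, len(trace), win)]

early_word = [0.0, 0.2, 0.5, 0.5, 0.5, 0.4, 0.3, 0.2]  # sustained effort
late_word  = [0.0, 0.1, 0.2, 0.5, 0.3, 0.1, 0.0, 0.0]  # brief effort

# A peak-only analysis sees no difference between conditions...
same_peak = peak(early_word) == peak(late_word)
# ...but the windowed time course shows sustained extra dilation throughout.
diff = [a - b for a, b in zip(windowed_means(early_word, 2),
                              windowed_means(late_word, 2))]
```

Every entry of `diff` is positive even though `same_peak` is true, mirroring how the study's position effect appeared only in the full time course.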
Trends in Hearing | Pub Date: 2025-01-01 | DOI: 10.1177/23312165241309301
Huiyong Zhang, Brian C J Moore, Feng Jiang, Mingfang Diao, Fei Ji, Xiaodong Li, Chengshi Zheng
Title: "Neural-WDRC: A Deep Learning Wide Dynamic Range Compression Method Combined With Controllable Noise Reduction for Hearing Aids"
Abstract: Wide dynamic range compression (WDRC) and noise reduction both play important roles in hearing aids. WDRC provides level-dependent amplification so that the level of sound produced by the hearing aid falls between the hearing threshold and the highest comfortable level of the listener, while noise reduction attenuates ambient noise with the goal of improving intelligibility and listening comfort and reducing effort. In most current hearing aids, noise reduction and WDRC are implemented sequentially, but this may distort the amplitude modulation patterns of both the speech and the noise. This paper describes a deep learning method, called Neural-WDRC, that implements both noise reduction and WDRC using a two-stage low-complexity network. The network first estimates the noise alone and the speech alone; fast-acting compression is then applied to the estimated speech and slow-acting compression to the estimated noise, with a controllable residual noise level to help the user perceive natural environmental sounds. Neural-WDRC is frame-based, and the output of the current frame is determined only by the current and preceding frames. Neural-WDRC was compared with conventional slow- and fast-acting compression and with signal-to-noise ratio (SNR)-aware compression, using objective measures and listening tests with two groups: normal-hearing participants listening to signals processed to simulate the effects of hearing loss, and hearing-impaired participants. The objective measures demonstrated that Neural-WDRC effectively reduced negative interactions of speech and noise in highly non-stationary noise scenarios. The listening tests showed that Neural-WDRC was preferred over the other compression methods for speech in non-stationary noises.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11770718/pdf/
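For readers unfamiliar with WDRC itself: the level-dependent amplification the abstract describes is conventionally characterized by a static input/output curve with a compression knee and ratio. The sketch below shows that static rule only, not the Neural-WDRC network or its time constants, and all parameter values (gain, knee point, ratio) are invented for illustration.

```python
# Static WDRC input/output rule for one frequency channel: linear gain below
# the compression knee; above it, output grows at 1/ratio dB per input dB.

def wdrc_output_level(in_db, gain_db=20.0, knee_db=45.0, ratio=3.0):
    """Map input level (dB SPL) to output level (dB SPL)."""
    linear_out = in_db + gain_db
    knee_out = knee_db + gain_db
    if linear_out <= knee_out:           # below the knee: linear amplification
        return linear_out
    return knee_out + (linear_out - knee_out) / ratio   # compressed region

quiet = wdrc_output_level(30.0)   # linear region: full 20 dB gain applied
loud = wdrc_output_level(75.0)    # compressed region: effective gain reduced
```

"Fast-acting" versus "slow-acting" compression in the abstract refers to how quickly the effective gain tracks level changes over time around a curve like this one.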
Trends in Hearing | Pub Date: 2025-01-01 | DOI: 10.1177/23312165241311721
Onn Wah Lee, Demi Gao, Tommy Peng, Julia Wunderlich, Darren Mao, Gautam Balasubramanian, Colette M McKay
Title: "Measuring Speech Discrimination Ability in Sleeping Infants Using fNIRS: A Proof of Principle"
Abstract: This study used functional near-infrared spectroscopy (fNIRS) to measure aspects of the speech discrimination ability of sleeping infants. We examined the morphology of the fNIRS response to three different speech contrasts: "Tea/Ba," "Bee/Ba," and "Ga/Ba." Sixteen infants aged between 3 and 13 months were included, and their fNIRS data were recorded during natural sleep. The stimuli were presented using a nonsilence baseline paradigm, in which repeated standard stimuli were presented between the novel stimulus blocks without any silent periods. The morphology of the fNIRS responses varied between speech contrasts. The data were fit with a model in which the responses were the sum of two independent and concurrent response mechanisms derived from previously published fNIRS detection responses: an oxyhemoglobin (HbO)-positive early-latency response and an HbO-negative late-latency response, hypothesized to reflect an auditory canonical response and a brain arousal response, respectively. The model fit the data well, with a median goodness of fit of 81%. The data showed that both response components had later latency when the left ear was the test ear than when the right ear was (p < .05), and that the negative component, attributed to brain arousal, was smallest for the most subtle contrast, "Ga/Ba" (p = .003).
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11758514/pdf/
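The modeling idea in this abstract — fitting each measured trace as a weighted sum of two fixed component waveforms and reporting goodness of fit — can be sketched with a tiny least-squares fit. Only the sum-of-two-components idea comes from the abstract; the component shapes, data, and goodness-of-fit definition (variance explained) below are illustrative assumptions.

```python
# Fit a trace y as a*c1 + b*c2 (two fixed component waveforms) by least
# squares, then report the fraction of variance explained by the fit.

def fit_two_components(y, c1, c2):
    """Least-squares amplitudes (a, b) for y ~ a*c1 + b*c2 (no intercept)."""
    s11 = sum(v * v for v in c1)
    s22 = sum(v * v for v in c2)
    s12 = sum(u * v for u, v in zip(c1, c2))
    s1y = sum(u * v for u, v in zip(c1, y))
    s2y = sum(u * v for u, v in zip(c2, y))
    det = s11 * s22 - s12 * s12
    return ((s1y * s22 - s2y * s12) / det,
            (s2y * s11 - s1y * s12) / det)

def variance_explained(y, fit):
    my = sum(y) / len(y)
    ss_tot = sum((v - my) ** 2 for v in y)
    ss_res = sum((v - f) ** 2 for v, f in zip(y, fit))
    return 1.0 - ss_res / ss_tot

early = [0.0, 1.0, 1.0, 0.0, 0.0, 0.0]    # toy early-latency HbO-positive shape
late = [0.0, 0.0, 0.0, -1.0, -1.0, 0.0]   # toy late-latency HbO-negative shape
y = [2 * u + 0.5 * v for u, v in zip(early, late)]  # noiseless synthetic trace
a, b = fit_two_components(y, early, late)
fit = [a * u + b * v for u, v in zip(early, late)]
gof = variance_explained(y, fit)
```

With real, noisy infant data the fit is of course imperfect; the paper reports a median goodness of fit of 81%.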
Trends in Hearing | Pub Date: 2024-09-14 | DOI: 10.1177/23312165241266322
David López-Ramos, Miriam I. Marrufo-Pérez, Almudena Eustaquio-Martín, Luis E. López-Bascuas, Enrique A. Lopez-Poveda
Title: "Adaptation to Noise in Spectrotemporal Modulation Detection and Word Recognition"
Abstract: Noise adaptation is the improvement in auditory function as the signal of interest is delayed in the noise. Here, we investigated whether noise adaptation occurs in spectral, temporal, and spectrotemporal modulation detection, as well as in speech recognition. Eighteen normal-hearing adults participated in the experiments. In the modulation detection tasks, the signal was a 200 ms spectrally and/or temporally modulated ripple noise. The spectral modulation rate was two cycles per octave, the temporal modulation rate was 10 Hz, and the spectrotemporal modulations combined these two modulations, resulting in a downward-moving ripple. A control experiment was performed to determine whether the results generalized to upward-moving ripples. In the speech recognition task, the signal consisted of disyllabic words, either unprocessed or vocoded to maintain only envelope cues. Modulation detection thresholds at 0 dB signal-to-noise ratio and speech reception thresholds were measured in quiet and in white noise (at 60 dB SPL) for noise-signal onset delays of 50 ms (early condition) and 800 ms (late condition). Adaptation was calculated as the threshold difference between the early and late conditions. Adaptation in word recognition was statistically significant for vocoded words (2.1 dB) but not for natural words (0.6 dB). Adaptation was statistically significant in spectral (2.1 dB) and temporal (2.2 dB) modulation detection but not in spectrotemporal modulation detection (downward ripple: 0.0 dB, upward ripple: -0.4 dB). Findings suggest that noise adaptation in speech recognition is unrelated to improvements in the encoding of spectrotemporal modulation cues.
Trends in Hearing | Pub Date: 2024-04-27 | DOI: 10.1177/23312165241240572
Maartje M. E. Hendrikse, Gertjan Dingemanse, André Goedegebure
Title: "On the Feasibility of Using Behavioral Listening Effort Test Methods to Evaluate Auditory Performance in Cochlear Implant Users"
Abstract: Realistic outcome measures that reflect everyday hearing challenges are needed to assess hearing aid and cochlear implant (CI) fitting. The literature suggests that listening effort measures may be more sensitive than established speech intelligibility measures to differences between hearing-device settings when speech intelligibility is near maximum. Which method measures listening effort most effectively for this purpose is currently unclear. This study investigated the feasibility of two tests for measuring changes in listening effort in CI users due to signal-to-noise ratio (SNR) differences, as would arise from different hearing-device settings. By comparing the effect size of SNR differences on listening effort measures with test-retest differences, the study evaluated the suitability of these tests for clinical use. Nineteen CI users underwent two dual-task listening effort tests, a sentence-final word identification and recall test (SWIRT) and a sentence verification test (SVT), each at two SNRs (+4 and +8 dB relative to the individual's 50% speech perception threshold). Both test methods showed a significant difference in listening effort between the SNRs, but the effect size was comparable to the test-retest difference, and the sensitivity was not superior to that of speech intelligibility measures. Thus, the implementations of the SVT and SWIRT used in this study are not suitable for clinical use to measure listening effort differences of this magnitude in individual CI users. However, they can be used in research involving CI users to analyze group data.