Trends in Hearing. Pub Date: 2024-01-01. DOI: 10.1177/23312165241232551
Bethany Plain, Hidde Pielage, Sophia E Kramer, Michael Richter, Gabrielle H Saunders, Niek J Versfeld, Adriana A Zekveld, Tanveer A Bhuiyan
{"title":"Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening.","authors":"Bethany Plain, Hidde Pielage, Sophia E Kramer, Michael Richter, Gabrielle H Saunders, Niek J Versfeld, Adriana A Zekveld, Tanveer A Bhuiyan","doi":"10.1177/23312165241232551","DOIUrl":"10.1177/23312165241232551","url":null,"abstract":"<p><p>In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults (mean = 64.6 years, SD = 9.2) with hearing loss. Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context and sentence accuracy. The k-fold cross validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD = 10.2) for task demand, 88.0% (SD = 7.5) for social context, and 60.0% (SD = 13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10981225/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140319548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of Hearing Aids on Language Outcomes in Preschool Children With Mild Bilateral Hearing Loss.","authors":"Yu-Chen Hung, Pei-Hsuan Ho, Pei-Hua Chen, Yi-Shin Tsai, Yi-Jui Li, Hung-Ching Lin","doi":"10.1177/23312165241256721","DOIUrl":"10.1177/23312165241256721","url":null,"abstract":"<p><p>This study aimed to investigate the role of hearing aid (HA) usage in language outcomes among preschool children aged 3-5 years with mild bilateral hearing loss (MBHL). The data were retrieved from a total of 52 children with MBHL and 30 children with normal hearing (NH). The association between demographical, audiological factors and language outcomes was examined. Analyses of variance were conducted to compare the language abilities of HA users, non-HA users, and their NH peers. Furthermore, regression analyses were performed to identify significant predictors of language outcomes. Aided better ear pure-tone average (BEPTA) was significantly correlated with language comprehension scores. Among children with MBHL, those who used HA outperformed the ones who did not use HA across all linguistic domains. The language skills of children with MBHL were comparable to those of their peers with NH. The degree of improvement in audibility in terms of aided BEPTA was a significant predictor of language comprehension. It is noteworthy that 50% of the parents expressed reluctance regarding HA use for their children with MBHL. The findings highlight the positive impact of HA usage on language development in this population. Professionals may therefore consider HAs as a viable treatment option for children with MBHL, especially when there is a potential risk of language delay due to hearing loss. It was observed that 25% of the children with MBHL had late-onset hearing loss. Consequently, the implementation of preschool screening or a listening performance checklist is recommended to facilitate early detection.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11113073/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141076740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing. Pub Date: 2024-01-01. DOI: 10.1177/23312165241260041
Larry E Humes, David A Zapala
{"title":"Easy as 1-2-3: Development and Evaluation of a Simple yet Valid Audiogram-Classification System.","authors":"Larry E Humes, David A Zapala","doi":"10.1177/23312165241260041","DOIUrl":"10.1177/23312165241260041","url":null,"abstract":"<p><p>Almost since the inception of the modern-day electroacoustic audiometer a century ago the results of pure-tone audiometry have been characterized by an audiogram. For almost as many years, clinicians and researchers have sought ways to distill the volume and complexity of information on the audiogram. Commonly used approaches have made use of pure-tone averages (PTAs) for various frequency ranges with the PTA for 500, 1000, 2000 and 4000 Hz (PTA4) being the most widely used for the categorization of hearing loss severity. Here, a three-digit triad is proposed as a single-number summary of not only the severity, but also the configuration and bilateral symmetry of the hearing loss. Each digit in the triad ranges from 0 to 9, increasing as the level of the pure-tone hearing threshold level (HTL) increases from a range of optimal hearing (< 10 dB Hearing Level; HL) to complete hearing loss (≥ 90 dB HL). Each digit also represents a different frequency region of the audiogram proceeding from left to right as: (Low, L) PTA for 500, 1000, and 2000 Hz; (Center, C) PTA for 3000, 4000 and 6000 Hz; and (High, H) HTL at 8000 Hz. This LCH Triad audiogram-classification system is evaluated using a large United States (U.S.) national dataset (N = 8,795) from adults 20 to 80 + years of age and two large clinical datasets totaling 8,254 adults covering a similar age range. Its ability to capture variations in hearing function was found to be superior to that of the widely used PTA4.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11179497/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141318660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing. Pub Date: 2024-01-01. DOI: 10.1177/23312165241276435
Inga Holube, Stefan Taesler, Saskia Ibelings, Martin Hansen, Jasper Ooster
{"title":"Automated Measurement of Speech Recognition, Reaction Time, and Speech Rate and Their Relation to Self-Reported Listening Effort for Normal-Hearing and Hearing-Impaired Listeners Using various Maskers.","authors":"Inga Holube, Stefan Taesler, Saskia Ibelings, Martin Hansen, Jasper Ooster","doi":"10.1177/23312165241276435","DOIUrl":"10.1177/23312165241276435","url":null,"abstract":"<p><p>In speech audiometry, the speech-recognition threshold (SRT) is usually established by adjusting the signal-to-noise ratio (SNR) until 50% of the words or sentences are repeated correctly. However, these conditions are rarely encountered in everyday situations. Therefore, for a group of 15 young participants with normal hearing and a group of 12 older participants with hearing impairment, speech-recognition scores were determined at SRT and at four higher SNRs using several stationary and fluctuating maskers. Participants' verbal responses were recorded, and participants were asked to self-report their listening effort on a categorical scale (self-reported listening effort, SR-LE). The responses were analyzed using an Automatic Speech Recognizer (ASR) and compared to the results of a human examiner. An intraclass correlation coefficient of <i>r </i>= .993 for the agreement between their corresponding speech-recognition scores was observed. As expected, speech-recognition scores increased with increasing SNR and decreased with increasing SR-LE. However, differences between speech-recognition scores for fluctuating and stationary maskers were observed as a function of SNR, but not as a function of SR-LE. The verbal response time (VRT) and the response speech rate (RSR) of the listeners' responses were measured using an ASR. The participants with hearing impairment showed significantly lower RSRs and higher VRTs compared to the participants with normal hearing. These differences may be attributed to differences in age, hearing, or both. With increasing SR-LE, VRT increased and RSR decreased. The results show the possibility of deriving a behavioral measure, VRT, measured directly from participants' verbal responses during speech audiometry, as a proxy for SR-LE.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11421406/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142299020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing. Pub Date: 2024-01-01. DOI: 10.1177/23312165231215916
Moritz Wächtler, Pascale Sandmann, Hartmut Meister
{"title":"The Right-Ear Advantage in Static and Dynamic Cocktail-Party Situations.","authors":"Moritz Wächtler, Pascale Sandmann, Hartmut Meister","doi":"10.1177/23312165231215916","DOIUrl":"10.1177/23312165231215916","url":null,"abstract":"<p><p>When presenting two competing speech stimuli, one to each ear, a right-ear advantage (REA) can often be observed, reflected in better speech recognition compared to the left ear. Considering the left-hemispheric dominance for language, the REA has been explained by superior contralateral pathways (structural models) and language-induced shifts of attention to the right (attentional models). There is some evidence that the REA becomes more pronounced, as cognitive load increases. Hence, it is interesting to investigate the REA in static (constant target talker) and dynamic (target changing pseudo-randomly) cocktail-party situations, as the latter is associated with a higher cognitive load than the former. Furthermore, previous research suggests an increasing REA, when listening becomes more perceptually challenging. The present study examined the REA by using virtual acoustics to simulate static and dynamic cocktail-party situations, with three spatially separated talkers uttering concurrent matrix sentences. Sentences were presented at low sound pressure levels or processed with a noise vocoder to increase perceptual load. Sixteen young normal-hearing adults participated in the study. The REA was assessed by means of word recognition scores and a detailed error analysis. Word recognition revealed a greater REA for the dynamic than for the static situations, compatible with the view that an increase in cognitive load results in a heightened REA. Also, the REA depended on the type of perceptual load, as indicated by a higher REA associated with vocoded compared to low-level stimuli. The results of the error analysis support both structural and attentional models of the REA.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10826403/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139570355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing. Pub Date: 2024-01-01. DOI: 10.1177/23312165241229057
Gloria Araiza-Illan, Luke Meyer, Khiet P Truong, Deniz Başkent
{"title":"Automated Speech Audiometry: Can It Work Using Open-Source Pre-Trained Kaldi-NL Automatic Speech Recognition?","authors":"Gloria Araiza-Illan, Luke Meyer, Khiet P Truong, Deniz Başkent","doi":"10.1177/23312165241229057","DOIUrl":"10.1177/23312165241229057","url":null,"abstract":"<p><p>A practical speech audiometry tool is the digits-in-noise (DIN) test for hearing screening of populations of varying ages and hearing status. The test is usually conducted by a human supervisor (e.g., clinician), who scores the responses spoken by the listener, or online, where software scores the responses entered by the listener. The test has 24-digit triplets presented in an adaptive staircase procedure, resulting in a speech reception threshold (SRT). We propose an alternative automated DIN test setup that can evaluate spoken responses whilst conducted without a human supervisor, using the open-source automatic speech recognition toolkit, Kaldi-NL. Thirty self-reported normal-hearing Dutch adults (19-64 years) completed one DIN + Kaldi-NL test. Their spoken responses were recorded and used for evaluating the transcript of decoded responses by Kaldi-NL. Study 1 evaluated the Kaldi-NL performance through its word error rate (WER), percentage of summed decoding errors regarding only digits found in the transcript compared to the total number of digits present in the spoken responses. Average WER across participants was 5.0% (range 0-48%, SD = 8.8%), with average decoding errors in three triplets per participant. Study 2 analyzed the effect that triplets with decoding errors from Kaldi-NL had on the DIN test output (SRT), using bootstrapping simulations. Previous research indicated 0.70 dB as the typical within-subject SRT variability for normal-hearing adults. Study 2 showed that up to four triplets with decoding errors produce SRT variations within this range, suggesting that our proposed setup could be feasible for clinical applications.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10943752/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140132882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing. Pub Date: 2024-01-01. DOI: 10.1177/23312165241229880
Sean R Anderson, Emily Burg, Lukas Suveg, Ruth Y Litovsky
{"title":"Review of Binaural Processing With Asymmetrical Hearing Outcomes in Patients With Bilateral Cochlear Implants.","authors":"Sean R Anderson, Emily Burg, Lukas Suveg, Ruth Y Litovsky","doi":"10.1177/23312165241229880","DOIUrl":"10.1177/23312165241229880","url":null,"abstract":"<p><p>Bilateral cochlear implants (BiCIs) result in several benefits, including improvements in speech understanding in noise and sound source localization. However, the benefit bilateral implants provide among recipients varies considerably across individuals. Here we consider one of the reasons for this variability: difference in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research in the area of interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented using excitation-inhibition of signals from the left ear and right ear, varying with the location of the sound in space, and represented by the lateral superior olive in the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by cascading series of cortical circuits. An internal, perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10976506/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140307503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing. Pub Date: 2024-01-01. DOI: 10.1177/23312165241273391
Kan Chen, Bo Yang, Xiaoyan Yue, He Mi, Jianjun Leng, Lujie Li, Haoyu Wang, Yaxin Lai
{"title":"Global, Regional, and National Burdens of Hearing Loss for Children and Adolescents from 1990 to 2019: A Trend Analysis.","authors":"Kan Chen, Bo Yang, Xiaoyan Yue, He Mi, Jianjun Leng, Lujie Li, Haoyu Wang, Yaxin Lai","doi":"10.1177/23312165241273391","DOIUrl":"10.1177/23312165241273391","url":null,"abstract":"<p><p>This study presents a comprehensive analysis of global, regional, and national trends in the burden of hearing loss (HL) among children and adolescents from 1990 to 2019, using data from the Global Burden of Disease study. Over this period, there was a general decline in HL prevalence and years lived with disability (YLDs) globally, with average annual percentage changes (AAPCs) of -0.03% (95% uncertainty interval [UI], -0.04% to -0.01%; <i>p</i> = 0.001) and -0.23% (95% UI, -0.25% to -0.20%; <i>p</i> < 0.001). Males exhibited higher rates of HL prevalence and YLDs than females. Mild and moderate HL were the most common categories across all age groups, but the highest proportion of YLDs was associated with profound HL [22.23% (95% UI, 8.63%-57.53%)]. Among females aged 15-19 years, the prevalence and YLD rates for moderate HL rose, with AAPCs of 0.14% (95% UI, 0.06%-0.22%; <i>p</i> = 0.001) and 0.13% (95% UI, 0.08%-0.18%; <i>p</i> < 0.001). This increase is primarily attributed to age-related and other HL (such as environmental, lifestyle factors, and occupational noise exposure) and otitis media, highlighting the need for targeted research and interventions for this demographic. Southeast Asia and Western Sub-Saharan Africa bore the heaviest HL burden, while High-income North America showed lower HL prevalence and YLD rates but a slight increasing trend in recent years, with AAPCs of 0.13% (95% UI, 0.1%-0.16%; <i>p</i> < 0.001) and 0.08% (95% UI, 0.04% to 0.12%; <i>p</i> < 0.001). Additionally, the analysis revealed a significant negative correlation between sociodemographic index (SDI) and both HL prevalence (<i>r</i> = -0.74; <i>p</i> < 0.001) and YLD (<i>r</i> = -0.76; <i>p</i> < 0.001) rates. However, the changes in HL trends were not significantly correlated with SDI, suggesting that factors beyond economic development, such as policies and cultural practices, also affect HL. Despite the overall optimistic trend, this study emphasizes the continued need to focus on specific high-risk groups and regions to further reduce the HL burden and enhance the quality of life for affected children and adolescents.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11342320/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142019246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing. Pub Date: 2024-01-01. DOI: 10.1177/23312165241287622
Vanessa Frei, Raffael Schmitt, Martin Meyer, Nathalie Giroud
{"title":"Processing of Visual Speech Cues in Speech-in-Noise Comprehension Depends on Working Memory Capacity and Enhances Neural Speech Tracking in Older Adults With Hearing Impairment.","authors":"Vanessa Frei, Raffael Schmitt, Martin Meyer, Nathalie Giroud","doi":"10.1177/23312165241287622","DOIUrl":"10.1177/23312165241287622","url":null,"abstract":"<p><p>Comprehending speech in noise (SiN) poses a challenge for older hearing-impaired listeners, requiring auditory and working memory resources. Visual speech cues provide additional sensory information supporting speech understanding, while the extent of such visual benefit is characterized by large variability, which might be accounted for by individual differences in working memory capacity (WMC). In the current study, we investigated behavioral and neurofunctional (i.e., neural speech tracking) correlates of auditory and audio-visual speech comprehension in babble noise and the associations with WMC. Healthy older adults with hearing impairment quantified by pure-tone hearing loss (threshold average: 31.85-57 dB, <i>N</i> = 67) listened to sentences in babble noise in audio-only, visual-only and audio-visual speech modality and performed a pattern matching and a comprehension task, while electroencephalography (EEG) was recorded. Behaviorally, no significant difference in task performance was observed across modalities. However, we did find a significant association between individual working memory capacity and task performance, suggesting a more complex interplay between audio-visual speech cues, working memory capacity and real-world listening tasks. Furthermore, we found that the visual speech presentation was accompanied by increased cortical tracking of the speech envelope, particularly in a right-hemispheric auditory topographical cluster. Post-hoc, we investigated the potential relationships between the behavioral performance and neural speech tracking but were not able to establish a significant association. Overall, our results show an increase in neurofunctional correlates of speech associated with congruent visual speech cues, specifically in a right auditory cluster, suggesting multisensory integration.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11520018/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142511002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trends in Hearing. Pub Date: 2024-01-01. DOI: 10.1177/23312165241287092
Jan A A van Heteren, Hanneke D van Oorschot, Anne W Wendrich, Jeroen P M Peters, Koenraad S Rhebergen, Wilko Grolman, Robert J Stokroos, Adriana L Smit
{"title":"Sound Localization in Single-Sided Deafness; Outcomes of a Randomized Controlled Trial on the Comparison Between Cochlear Implantation, Bone Conduction Devices, and Contralateral Routing of Signals Hearing Aids.","authors":"Jan A A van Heteren, Hanneke D van Oorschot, Anne W Wendrich, Jeroen P M Peters, Koenraad S Rhebergen, Wilko Grolman, Robert J Stokroos, Adriana L Smit","doi":"10.1177/23312165241287092","DOIUrl":"10.1177/23312165241287092","url":null,"abstract":"<p><p>There is currently a lack of prospective studies comparing multiple treatment options for single-sided deafness (SSD) in terms of long-term sound localization outcomes. This randomized controlled trial (RCT) aims to compare the objective and subjective sound localization abilities of SSD patients treated with a cochlear implant (CI), a bone conduction device (BCD), a contralateral routing of signals (CROS) hearing aid, or no treatment after two years of follow-up. About 120 eligible patients were randomized to cochlear implantation or to a trial period with first a BCD on a headband, then a CROS (or vice versa). After the trial periods, participants opted for a surgically implanted BCD, a CROS, or no treatment. Sound localization accuracy (in three configurations, calculated as percentage correct and root-mean squared error in degrees) and subjective spatial hearing (subscale of the Speech, Spatial and Qualities of hearing (SSQ) questionnaire) were assessed at baseline and after 24 months of follow-up. At the start of follow-up, 28 participants were implanted with a CI, 25 with a BCD, 34 chose a CROS, and 26 opted for no treatment. Participants in the CI group showed better sound localization accuracy and subjective spatial hearing compared to participants in the BCD, CROS, and no-treatment groups at 24 months. Participants in the CI and CROS groups showed improved subjective spatial hearing at 24 months compared to baseline. To conclude, CI outperformed the BCD, CROS, and no-treatment groups in terms of sound localization accuracy and subjective spatial hearing in SSD patients. <b>TRIAL REGISTRATION</b> Netherlands Trial Register (https://onderzoekmetmensen.nl): NL4457, <i>CINGLE</i> trial.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11526308/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142523412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}