Latest Articles in Trends in Hearing

The Right-Ear Advantage in Static and Dynamic Cocktail-Party Situations.
IF 2.6 · Medicine, Region 2
Trends in Hearing · Pub Date: 2024-01-01 · DOI: 10.1177/23312165231215916
Moritz Wächtler, Pascale Sandmann, Hartmut Meister
{"title":"The Right-Ear Advantage in Static and Dynamic Cocktail-Party Situations.","authors":"Moritz Wächtler, Pascale Sandmann, Hartmut Meister","doi":"10.1177/23312165231215916","DOIUrl":"10.1177/23312165231215916","url":null,"abstract":"<p><p>When presenting two competing speech stimuli, one to each ear, a right-ear advantage (REA) can often be observed, reflected in better speech recognition compared to the left ear. Considering the left-hemispheric dominance for language, the REA has been explained by superior contralateral pathways (structural models) and language-induced shifts of attention to the right (attentional models). There is some evidence that the REA becomes more pronounced, as cognitive load increases. Hence, it is interesting to investigate the REA in static (constant target talker) and dynamic (target changing pseudo-randomly) cocktail-party situations, as the latter is associated with a higher cognitive load than the former. Furthermore, previous research suggests an increasing REA, when listening becomes more perceptually challenging. The present study examined the REA by using virtual acoustics to simulate static and dynamic cocktail-party situations, with three spatially separated talkers uttering concurrent matrix sentences. Sentences were presented at low sound pressure levels or processed with a noise vocoder to increase perceptual load. Sixteen young normal-hearing adults participated in the study. The REA was assessed by means of word recognition scores and a detailed error analysis. Word recognition revealed a greater REA for the dynamic than for the static situations, compatible with the view that an increase in cognitive load results in a heightened REA. Also, the REA depended on the type of perceptual load, as indicated by a higher REA associated with vocoded compared to low-level stimuli. The results of the error analysis support both structural and attentional models of the REA.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165231215916"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10826403/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139570355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
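The noise-vocoder processing mentioned in this abstract is a standard way to degrade spectral detail while preserving temporal envelopes. The sketch below is a generic, minimal noise vocoder, not the study's actual processing chain; the band count, filter orders, and envelope cutoff are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_bands=8, f_lo=100.0, f_hi=8000.0, env_cutoff=50.0):
    """Generic noise vocoder: split the speech signal into log-spaced bands,
    extract each band's envelope, and use it to modulate band-limited noise.
    Assumes fs is well above 2 * f_hi (e.g., 44.1 kHz)."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)                 # log-spaced band edges in Hz
    carrier = np.random.default_rng(0).standard_normal(len(x))    # broadband noise carrier
    env_sos = butter(4, env_cutoff, btype="lowpass", fs=fs, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        speech_band = sosfiltfilt(band_sos, x)
        env = sosfiltfilt(env_sos, np.abs(hilbert(speech_band)))  # smoothed Hilbert envelope
        noise_band = sosfiltfilt(band_sos, carrier)
        out += np.clip(env, 0.0, None) * noise_band
    return out / (np.max(np.abs(out)) + 1e-12)                    # normalize to avoid clipping
```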
Head and Eye Movements Reveal Compensatory Strategies for Acute Binaural Deficits During Sound Localization.
IF 2.7 · Medicine, Region 2
Trends in Hearing · Pub Date: 2024-01-01 · DOI: 10.1177/23312165231217910
Robel Z Alemu, Blake C Papsin, Robert V Harrison, Al Blakeman, Karen A Gordon
{"title":"Head and Eye Movements Reveal Compensatory Strategies for Acute Binaural Deficits During Sound Localization.","authors":"Robel Z Alemu, Blake C Papsin, Robert V Harrison, Al Blakeman, Karen A Gordon","doi":"10.1177/23312165231217910","DOIUrl":"10.1177/23312165231217910","url":null,"abstract":"<p><p>The present study aimed to define use of head and eye movements during sound localization in children and adults to: (1) assess effects of stationary versus moving sound and (2) define effects of binaural cues degraded through acute monaural ear plugging. Thirty-three youth (<i>M</i><sub>Age </sub>= 12.9 years) and seventeen adults (<i>M</i><sub>Age </sub>= 24.6 years) with typical hearing were recruited and asked to localize white noise anywhere within a horizontal arc from -60° (left) to +60° (right) azimuth in two conditions (typical binaural and right ear plugged). In each trial, sound was presented at an initial stationary position (L1) and then while moving at ∼4°/s until reaching a second position (L2). Sound moved in five conditions (±40°, ±20°, or 0°). Participants adjusted a laser pointer to indicate L1 and L2 positions. Unrestricted head and eye movements were collected with gyroscopic sensors on the head and eye-tracking glasses, respectively. Results confirmed that accurate sound localization of both stationary and moving sound is disrupted by acute monaural ear plugging. Eye movements preceded head movements for sound localization in normal binaural listening and head movements were larger than eye movements during monaural plugging. Head movements favored the unplugged left ear when stationary sounds were presented in the right hemifield and during sound motion in both hemifields regardless of the movement direction. Disrupted binaural cues have greater effects on localization of moving than stationary sound. Head movements reveal preferential use of the better-hearing ear and relatively stable eye positions likely reflect normal vestibular-ocular reflexes.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165231217910"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10832417/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139651917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening.
IF 2.7 · Medicine, Region 2
Trends in Hearing · Pub Date: 2024-01-01 · DOI: 10.1177/23312165241232551
Bethany Plain, Hidde Pielage, Sophia E Kramer, Michael Richter, Gabrielle H Saunders, Niek J Versfeld, Adriana A Zekveld, Tanveer A Bhuiyan
{"title":"Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening.","authors":"Bethany Plain, Hidde Pielage, Sophia E Kramer, Michael Richter, Gabrielle H Saunders, Niek J Versfeld, Adriana A Zekveld, Tanveer A Bhuiyan","doi":"10.1177/23312165241232551","DOIUrl":"10.1177/23312165241232551","url":null,"abstract":"<p><p>In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults (mean  =  64.6 years, SD  =  9.2) with hearing loss. Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context and sentence accuracy. The k-fold cross validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD  =  10.2) for task demand, 88.0% (SD  =  7.5) for social context, and 60.0% (SD  =  13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241232551"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10981225/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140319548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
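The classification pipeline described above (seven pupil and cardiovascular features per trial, k-nearest neighbor classifiers, k-fold cross-validation, and group-level versus individually calibrated training) maps onto a standard supervised-learning recipe. A minimal scikit-learn sketch under those assumptions follows; the placeholder data, the value of k, and the feature scaling are illustrative choices, not taken from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder data: 10 participants x 20 trials, 7 features per trial
# (baseline pupil size, peak/mean pupil dilation, interbeat interval,
# blood volume pulse amplitude, pre-ejection period, pulse arrival time).
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 7))
y = np.tile([0, 1], 100)                       # e.g., low vs. high task demand
participant_ids = np.repeat(np.arange(10), 20)

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

# Group-level classifier: pool all participants' trials and cross-validate.
group_acc = cross_val_score(knn, X, y, cv=5).mean()
print(f"group-level CV accuracy: {group_acc:.3f}")

# Individually calibrated classifier: train and test within one participant's trials.
mask = participant_ids == 0
indiv_acc = cross_val_score(knn, X[mask], y[mask], cv=5).mean()
print(f"participant 0 CV accuracy: {indiv_acc:.3f}")
```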
Automated Measurement of Speech Recognition, Reaction Time, and Speech Rate and Their Relation to Self-Reported Listening Effort for Normal-Hearing and Hearing-Impaired Listeners Using various Maskers.
IF 2.6 · Medicine, Region 2
Trends in Hearing · Pub Date: 2024-01-01 · DOI: 10.1177/23312165241276435
Inga Holube, Stefan Taesler, Saskia Ibelings, Martin Hansen, Jasper Ooster
{"title":"Automated Measurement of Speech Recognition, Reaction Time, and Speech Rate and Their Relation to Self-Reported Listening Effort for Normal-Hearing and Hearing-Impaired Listeners Using various Maskers.","authors":"Inga Holube, Stefan Taesler, Saskia Ibelings, Martin Hansen, Jasper Ooster","doi":"10.1177/23312165241276435","DOIUrl":"10.1177/23312165241276435","url":null,"abstract":"<p><p>In speech audiometry, the speech-recognition threshold (SRT) is usually established by adjusting the signal-to-noise ratio (SNR) until 50% of the words or sentences are repeated correctly. However, these conditions are rarely encountered in everyday situations. Therefore, for a group of 15 young participants with normal hearing and a group of 12 older participants with hearing impairment, speech-recognition scores were determined at SRT and at four higher SNRs using several stationary and fluctuating maskers. Participants' verbal responses were recorded, and participants were asked to self-report their listening effort on a categorical scale (self-reported listening effort, SR-LE). The responses were analyzed using an Automatic Speech Recognizer (ASR) and compared to the results of a human examiner. An intraclass correlation coefficient of <i>r </i>= .993 for the agreement between their corresponding speech-recognition scores was observed. As expected, speech-recognition scores increased with increasing SNR and decreased with increasing SR-LE. However, differences between speech-recognition scores for fluctuating and stationary maskers were observed as a function of SNR, but not as a function of SR-LE. The verbal response time (VRT) and the response speech rate (RSR) of the listeners' responses were measured using an ASR. The participants with hearing impairment showed significantly lower RSRs and higher VRTs compared to the participants with normal hearing. These differences may be attributed to differences in age, hearing, or both. With increasing SR-LE, VRT increased and RSR decreased. The results show the possibility of deriving a behavioral measure, VRT, measured directly from participants' verbal responses during speech audiometry, as a proxy for SR-LE.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241276435"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11421406/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142299020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
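Automated scoring of the kind described above amounts to comparing the recognized words against the target words for each trial and timing the spoken response. The sketch below is a simplified illustration, assuming the ASR already returns a word list and word-level timestamps; the matching rule, the example sentence, and the function names are hypothetical, not taken from the study.

```python
def word_score(target_words, asr_words):
    """Fraction of target words found in the ASR transcript of the response."""
    remaining = [w.lower() for w in asr_words]
    hits = 0
    for w in target_words:
        wl = w.lower()
        if wl in remaining:
            remaining.remove(wl)     # each recognized word can match only one target word
            hits += 1
    return hits / len(target_words)

def response_speech_rate(word_timestamps):
    """Words per second across the spoken response, from (onset, offset) times in seconds."""
    onset = word_timestamps[0][0]
    offset = word_timestamps[-1][1]
    return len(word_timestamps) / (offset - onset)

# Hypothetical trial: one substitution error out of five target words.
print(word_score(["Peter", "kauft", "drei", "nasse", "Sessel"],
                 ["peter", "kauft", "vier", "nasse", "sessel"]))          # -> 0.8
print(response_speech_rate([(0.45, 0.80), (0.85, 1.10), (1.15, 1.40),
                            (1.45, 1.80), (1.85, 2.30)]))                 # -> ~2.7 words/s
```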
Editorial: Cochlear Implants and Music.
IF 2.7 · Medicine, Region 2
Trends in Hearing · Pub Date: 2024-01-01 · DOI: 10.1177/23312165241231685
Deborah A Vickers, Brian C J Moore
{"title":"Editorial: Cochlear Implants and Music.","authors":"Deborah A Vickers, Brian C J Moore","doi":"10.1177/23312165241231685","DOIUrl":"10.1177/23312165241231685","url":null,"abstract":"","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241231685"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10874149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139742320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated Speech Audiometry: Can It Work Using Open-Source Pre-Trained Kaldi-NL Automatic Speech Recognition?
IF 2.7 · Medicine, Region 2
Trends in Hearing · Pub Date: 2024-01-01 · DOI: 10.1177/23312165241229057
Gloria Araiza-Illan, Luke Meyer, Khiet P Truong, Deniz Başkent
{"title":"Automated Speech Audiometry: Can It Work Using Open-Source Pre-Trained Kaldi-NL Automatic Speech Recognition?","authors":"Gloria Araiza-Illan, Luke Meyer, Khiet P Truong, Deniz Başkent","doi":"10.1177/23312165241229057","DOIUrl":"10.1177/23312165241229057","url":null,"abstract":"<p><p>A practical speech audiometry tool is the digits-in-noise (DIN) test for hearing screening of populations of varying ages and hearing status. The test is usually conducted by a human supervisor (e.g., clinician), who scores the responses spoken by the listener, or online, where software scores the responses entered by the listener. The test has 24-digit triplets presented in an adaptive staircase procedure, resulting in a speech reception threshold (SRT). We propose an alternative automated DIN test setup that can evaluate spoken responses whilst conducted without a human supervisor, using the open-source automatic speech recognition toolkit, Kaldi-NL. Thirty self-reported normal-hearing Dutch adults (19-64 years) completed one DIN + Kaldi-NL test. Their spoken responses were recorded and used for evaluating the transcript of decoded responses by Kaldi-NL. Study 1 evaluated the Kaldi-NL performance through its word error rate (WER), percentage of summed decoding errors regarding only digits found in the transcript compared to the total number of digits present in the spoken responses. Average WER across participants was 5.0% (range 0-48%, SD = 8.8%), with average decoding errors in three triplets per participant. Study 2 analyzed the effect that triplets with decoding errors from Kaldi-NL had on the DIN test output (SRT), using bootstrapping simulations. Previous research indicated 0.70 dB as the typical within-subject SRT variability for normal-hearing adults. Study 2 showed that up to four triplets with decoding errors produce SRT variations within this range, suggesting that our proposed setup could be feasible for clinical applications.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241229057"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10943752/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140132882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
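The Kaldi-NL evaluation in Study 1 reduces to counting digit decoding errors against the digits the listener actually spoke. A minimal sketch of a digit-level word error rate over a set of triplets is shown below, using a plain edit-distance alignment; this is an illustrative reconstruction, not the authors' scoring script.

```python
def levenshtein(ref, hyp):
    """Minimum number of substitutions, deletions, and insertions to turn ref into hyp."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)]

def digit_wer(spoken_triplets, decoded_triplets):
    """WER over digits: summed edit distance divided by the total number of digits spoken."""
    errors = sum(levenshtein(s, h) for s, h in zip(spoken_triplets, decoded_triplets))
    total = sum(len(s) for s in spoken_triplets)
    return 100.0 * errors / total

# Example with 3 of the 24 triplets: one decoding error ("4" decoded as "5") gives 1/9 ≈ 11%.
spoken  = [["2", "7", "4"], ["8", "1", "6"], ["3", "9", "0"]]
decoded = [["2", "7", "5"], ["8", "1", "6"], ["3", "9", "0"]]
print(f"digit WER: {digit_wer(spoken, decoded):.1f}%")
```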
Review of Binaural Processing With Asymmetrical Hearing Outcomes in Patients With Bilateral Cochlear Implants.
IF 2.7 · Medicine, Region 2
Trends in Hearing · Pub Date: 2024-01-01 · DOI: 10.1177/23312165241229880
Sean R Anderson, Emily Burg, Lukas Suveg, Ruth Y Litovsky
{"title":"Review of Binaural Processing With Asymmetrical Hearing Outcomes in Patients With Bilateral Cochlear Implants.","authors":"Sean R Anderson, Emily Burg, Lukas Suveg, Ruth Y Litovsky","doi":"10.1177/23312165241229880","DOIUrl":"10.1177/23312165241229880","url":null,"abstract":"<p><p>Bilateral cochlear implants (BiCIs) result in several benefits, including improvements in speech understanding in noise and sound source localization. However, the benefit bilateral implants provide among recipients varies considerably across individuals. Here we consider one of the reasons for this variability: difference in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research in the area of interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented using excitation-inhibition of signals from the left ear and right ear, varying with the location of the sound in space, and represented by the lateral superior olive in the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by cascading series of cortical circuits. An internal, perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241229880"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10976506/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140307503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Estimating Pitch Information From Simulated Cochlear Implant Signals With Deep Neural Networks.
IF 2.6 · Medicine, Region 2
Trends in Hearing · Pub Date: 2024-01-01 · DOI: 10.1177/23312165241298606
Takanori Ashihara, Shigeto Furukawa, Makio Kashino
{"title":"Estimating Pitch Information From Simulated Cochlear Implant Signals With Deep Neural Networks.","authors":"Takanori Ashihara, Shigeto Furukawa, Makio Kashino","doi":"10.1177/23312165241298606","DOIUrl":"10.1177/23312165241298606","url":null,"abstract":"<p><p>Cochlear implant (CI) users, even with substantial speech comprehension, generally have poor sensitivity to pitch information (or fundamental frequency, F0). This insensitivity is often attributed to limited spectral and temporal resolution in the CI signals. However, the pitch sensitivity markedly varies among individuals, and some users exhibit fairly good sensitivity. This indicates that the CI signal contains sufficient information about F0, and users' sensitivity is predominantly limited by other physiological conditions such as neuroplasticity or neural health. We estimated the upper limit of F0 information that a CI signal can convey by decoding F0 from simulated CI signals (multi-channel pulsatile signals) with a deep neural network model (referred to as the CI model). We varied the number of electrode channels and the pulse rate, which should respectively affect spectral and temporal resolutions of stimulus representations. The F0-estimation performance generally improved with increasing number of channels and pulse rate. For the sounds presented under quiet conditions, the model performance was at best comparable to that of a control waveform model, which received raw-waveform inputs. Under conditions in which background noise was imposed, the performance of the CI model generally degraded by a greater degree than that of the waveform model. The pulse rate had a particularly large effect on predicted performance. These observations indicate that the CI signal contains some information for predicting F0, which is particularly sufficient for targets under quiet conditions. The temporal resolution (represented as pulse rate) plays a critical role in pitch representation under noisy conditions.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241298606"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11693851/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142683025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
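The decoding model above takes a multi-channel pulsatile signal (electrode channels by time) and predicts F0. The abstract does not describe the network architecture, so the following PyTorch sketch only illustrates the general idea of such a regression model; the channel count, layer sizes, and training details are assumptions, not the authors' CI model.

```python
import torch
import torch.nn as nn

class F0Estimator(nn.Module):
    """Toy 1-D CNN mapping a simulated CI signal (channels x time) to an F0 estimate in Hz."""
    def __init__(self, n_channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # collapse the time axis
            nn.Flatten(),
            nn.Linear(64, 1),             # one F0 value per input segment
        )

    def forward(self, x):                 # x: (batch, n_channels, n_samples)
        return self.net(x).squeeze(-1)

# One training step on placeholder data; real inputs would be multi-channel pulse trains.
model = F0Estimator(n_channels=16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 16, 2000)              # 8 segments of a 16-channel simulated CI signal
f0_true = torch.rand(8) * 300 + 100       # target F0 values between 100 and 400 Hz
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), f0_true)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.1f}")
```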
The Effect of Collaborative Triadic Conversations in Noise on Decision-Making in a General-Knowledge Task.
IF 2.6 · Medicine, Region 2
Trends in Hearing · Pub Date: 2024-01-01 · DOI: 10.1177/23312165241305058
Ingvi Örnolfsson, Axel Ahrens, Torsten Dau, Tobias May
{"title":"The Effect of Collaborative Triadic Conversations in Noise on Decision-Making in a General-Knowledge Task.","authors":"Ingvi Örnolfsson, Axel Ahrens, Torsten Dau, Tobias May","doi":"10.1177/23312165241305058","DOIUrl":"10.1177/23312165241305058","url":null,"abstract":"<p><p>Collaboration is a key element of many communicative interactions. Analyzing the effect of collaborative interaction on subsequent decision-making tasks offers the potential to quantitatively evaluate criteria that are indicative of successful communication. While many studies have explored how collaboration aids decision-making, little is known about how communicative barriers, such as loud background noise or hearing impairment, affect this process. This study investigated how collaborative triadic conversations held in different background noise levels affected the decision-making of individual group members in a subsequent individual task. Thirty normal-hearing participants were recruited and organized into triads. First, each participant answered a series of binary general knowledge questions and provided a confidence rating along with each response. The questions were then discussed in triads in either loud (78 dB) or soft (48 dB) background noise. Participants then answered the same questions individually again. Three decision-making measures - stay/switch behavior, decision convergence, and voting strategy - were used to assess if and how participants adjusted their initial decisions after the conversations. The results revealed an interaction between initial confidence rating and noise level: participants were more likely to modify their decisions towards high-confidence prior decisions, and this effect was more pronounced when the conversations had taken place in loud noise. We speculate that this may be because low-confidence opinions are less likely to be voiced in noisy environments compared to high-confidence opinions. The findings demonstrate that decision-making tasks can be designed for conversation studies with groups of more than two participants, and that such tasks can be used to explore how communicative barriers impact subsequent decision-making of individual group members.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241305058"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11639005/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sound Localization in Single-Sided Deafness; Outcomes of a Randomized Controlled Trial on the Comparison Between Cochlear Implantation, Bone Conduction Devices, and Contralateral Routing of Signals Hearing Aids.
IF 2.6 · Medicine, Region 2
Trends in Hearing · Pub Date: 2024-01-01 · DOI: 10.1177/23312165241287092
Jan A A van Heteren, Hanneke D van Oorschot, Anne W Wendrich, Jeroen P M Peters, Koenraad S Rhebergen, Wilko Grolman, Robert J Stokroos, Adriana L Smit
{"title":"Sound Localization in Single-Sided Deafness; Outcomes of a Randomized Controlled Trial on the Comparison Between Cochlear Implantation, Bone Conduction Devices, and Contralateral Routing of Signals Hearing Aids.","authors":"Jan A A van Heteren, Hanneke D van Oorschot, Anne W Wendrich, Jeroen P M Peters, Koenraad S Rhebergen, Wilko Grolman, Robert J Stokroos, Adriana L Smit","doi":"10.1177/23312165241287092","DOIUrl":"10.1177/23312165241287092","url":null,"abstract":"<p><p>There is currently a lack of prospective studies comparing multiple treatment options for single-sided deafness (SSD) in terms of long-term sound localization outcomes. This randomized controlled trial (RCT) aims to compare the objective and subjective sound localization abilities of SSD patients treated with a cochlear implant (CI), a bone conduction device (BCD), a contralateral routing of signals (CROS) hearing aid, or no treatment after two years of follow-up. About 120 eligible patients were randomized to cochlear implantation or to a trial period with first a BCD on a headband, then a CROS (or vice versa). After the trial periods, participants opted for a surgically implanted BCD, a CROS, or no treatment. Sound localization accuracy (in three configurations, calculated as percentage correct and root-mean squared error in degrees) and subjective spatial hearing (subscale of the Speech, Spatial and Qualities of hearing (SSQ) questionnaire) were assessed at baseline and after 24 months of follow-up. At the start of follow-up, 28 participants were implanted with a CI, 25 with a BCD, 34 chose a CROS, and 26 opted for no treatment. Participants in the CI group showed better sound localization accuracy and subjective spatial hearing compared to participants in the BCD, CROS, and no-treatment groups at 24 months. Participants in the CI and CROS groups showed improved subjective spatial hearing at 24 months compared to baseline. To conclude, CI outperformed the BCD, CROS, and no-treatment groups in terms of sound localization accuracy and subjective spatial hearing in SSD patients. <b>TRIAL REGISTRATION</b> Netherlands Trial Register (https://onderzoekmetmensen.nl): NL4457, <i>CINGLE</i> trial.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241287092"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11526308/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142523412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
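Localization accuracy in this trial is reported both as percentage correct and as root-mean-squared error in degrees. A minimal sketch of both metrics computed from target and response azimuths is given below; the tolerance that defines a "correct" response is an assumption, since the abstract does not state the exact criterion.

```python
import numpy as np

def localization_metrics(target_deg, response_deg, tolerance_deg=15.0):
    """Percentage of responses within a tolerance of the target, and RMS error in degrees."""
    target = np.asarray(target_deg, dtype=float)
    response = np.asarray(response_deg, dtype=float)
    err = response - target
    pct_correct = 100.0 * np.mean(np.abs(err) <= tolerance_deg)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return pct_correct, rmse

# Hypothetical five trials with targets between -60 and +60 degrees azimuth.
targets = [-60, -30, 0, 30, 60]
responses = [-45, -30, 10, 40, 20]
print(localization_metrics(targets, responses))   # -> (80.0, ~20.1)
```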