Ear and Hearing: Latest Articles

Using Pupillometry in Virtual Reality as a Tool for Speech-in-Noise Research.
IF 2.8, CAS Tier 2 (Medicine)
Ear and Hearing Pub Date : 2025-11-01 Epub Date: 2025-07-02 DOI: 10.1097/AUD.0000000000001692
Hidde Pielage, Bethany Plain, Sjors van de Ven, Gabrielle H Saunders, Niek J Versfeld, Sophia E Kramer, Adriana A Zekveld
{"title":"Using Pupillometry in Virtual Reality as a Tool for Speech-in-Noise Research.","authors":"Hidde Pielage, Bethany Plain, Sjors van de Ven, Gabrielle H Saunders, Niek J Versfeld, Sophia E Kramer, Adriana A Zekveld","doi":"10.1097/AUD.0000000000001692","DOIUrl":"10.1097/AUD.0000000000001692","url":null,"abstract":"<p><strong>Objectives: </strong>Virtual reality (VR) could be used in speech perception research to reduce the gap between the laboratory and real life. However, the suitability of using VR head-mounted displays (HMDs) warrants investigation, especially when pupillometric measurements are required. The present study aimed to assess if pupil measurements taken within an HMD would be sensitive to changes in listening effort related to a speech perception task. Task load of a VR speech-in-noise task was manipulated while pupil size was recorded within an HMD. The present study also assessed if VR could be used to simulate the copresence of other persons during listening, which is often an important aspect of real-life listening. To this end, participants completed the speech-in-noise task both in the copresence of virtual persons (agents) and while the virtual persons were replaced with visual distractors.</p><p><strong>Design: </strong>Thirty-three normal-hearing participants were provided with a VR-HMD and completed a speech-in-noise task in a virtual environment while their pupil size was measured. Participants were simultaneously presented with two sentences-one to each ear-which were masked by stationary noise that was 3 dB louder (-3 dB signal to noise ratio) than the sentences. Task load was manipulated by having participants attend to and repeat either one sentence or both sentences. Participants did the task both while accompanied by two virtual agents who provided positive (head nodding) and negative (head shaking) feedback on some trials, or in the presence of two visual distractors that did not provide feedback (control condition). We assessed the effect of task load and copresence on performance, measures of pupil size (baseline pupil size and peak pupil dilation), and several subjective ratings. Participants also completed two questionnaires related to their experience of the virtual environment.</p><p><strong>Results: </strong>Task load significantly affected baseline pupil size, peak pupil dilation, and subjective ratings of effort, task difficulty, and performance. However, the manipulation of virtual copresence did not affect any of the outcome measures. The effect of task load on performance could not be assessed, as single-sentence conditions often resulted in a ceiling score (100% correct). An exploratory analysis provided some indication that trials following positive feedback from the agents (as compared to no feedback) showed increased baseline pupil sizes. Scores on the questionnaires indicated that participants were not highly immersed in the virtual environment, possibly explaining why they were largely unaffected by the virtual copresence manipulation.</p><p><strong>Conclusions: </strong>The finding that baseline pupil size and peak pupil dilation were sensitive to the manipulation of task load suggests that HMD pupillometry is sensitive to changes in arousal and effort. 
This supports the idea that VR-HMDs can be successf","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1573-1583"},"PeriodicalIF":2.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12533776/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144546260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
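
The two pupil measures named above (baseline pupil size and peak pupil dilation) are typically computed per trial from the pupil trace around sentence onset. Below is a minimal Python sketch of that computation; the sampling rate, window boundaries, and function names are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def pupil_metrics(trace, fs, baseline_s=1.0, response_window_s=(0.0, 4.0)):
    """Derive baseline pupil size and peak pupil dilation from one trial.

    trace : 1-D array of pupil diameter samples, time-locked so that the
            sentence onset sits at index baseline_s * fs.
    fs    : eye-tracker sampling rate in Hz.
    The 1-s baseline and 0-4 s response window are illustrative choices.
    """
    onset = int(baseline_s * fs)
    baseline = np.nanmean(trace[:onset])            # mean size before sentence onset
    start = onset + int(response_window_s[0] * fs)
    stop = onset + int(response_window_s[1] * fs)
    dilation = trace[start:stop] - baseline         # baseline-corrected trace
    peak_dilation = np.nanmax(dilation)             # peak pupil dilation
    return baseline, peak_dilation

# Example: 120 Hz trace with a simulated dilation response after onset
fs = 120
t = np.arange(-1.0, 4.0, 1 / fs)
trace = 4.0 + 0.3 * np.exp(-((t - 1.5) ** 2) / 0.5) * (t > 0)
print(pupil_metrics(trace, fs))
```
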
Melodic Contour Identification by Cochlear-Implant Listeners With Asymmetric Phantom Pulses Presented to Apical Electrodes.
IF 2.8, CAS Tier 2 (Medicine)
Ear and Hearing Pub Date : 2025-11-01 Epub Date: 2025-07-03 DOI: 10.1097/AUD.0000000000001691
Olivier Macherey, Robert P Carlyon
{"title":"Melodic Contour Identification by Cochlear-Implant Listeners With Asymmetric Phantom Pulses Presented to Apical Electrodes.","authors":"Olivier Macherey, Robert P Carlyon","doi":"10.1097/AUD.0000000000001691","DOIUrl":"10.1097/AUD.0000000000001691","url":null,"abstract":"<p><strong>Objectives: </strong>(a) To compare performance by cochlear-implant listeners on a melodic contour identification task when the fundamental frequency (F0) is encoded explicitly by single-pulse-per-period (SPP) pulse trains presented to an apical channel, by amplitude modulation of high-rate pulse trains presented to several electrodes, and by these two methods combined, (b) to measure melodic contour identification as a function of the range of F0s tested, (c) to determine whether so-called asymmetric phantom stimulation improves melodic contour identification relative to monopolar stimulation, as has been shown previously using pitch-ranking tasks.</p><p><strong>Design: </strong>Three experiments measured melodic contour identification by cochlear-implant listeners with two different methods of encoding fundamental frequency (F0), both singly and in combination. One method presented SPP pulse trains at the F0 rate to an apical channel in either partial-bipolar or monopolar mode. The second method applied amplitude modulation at F0 to high-rate (~2000 pulses per second) pulse trains on six adjacent electrodes. For this \"MOD\" stimulation, the channel envelopes were misaligned so as to simulate the effects of the bandpass filters in the commercial signal-processing strategy.</p><p><strong>Results: </strong>In experiment 1, the SPP stimulation used the asymmetric phantom method: pseudomonophasic pulses were applied in partial-bipolar mode to electrodes 1 and 3, with 25% of current returned via an extra-cochlear electrode, and with the short high-amplitude phase anodic with respect to electrode 1. The MOD stimuli were presented to a set of basal electrodes. Performance for SPP stimulation was better, both when presented alone and when combined with MOD stimulation, relative to MOD stimulation alone. Performance was also better when the range of F0s present in the stimuli spanned a low range (97 to 194 Hz) than when they spanned a medium (161 to 322 Hz) or a high range (242 to 484 Hz). Experiment 2 was similar to experiment 1 except that the MOD stimuli were presented to a set of six apical electrodes. Performance with SPP stimulation alone was again significantly better than with MOD stimulation, but the difference between combined and MOD stimulation was not significant. Experiment 3 compared performance of SPP stimulation applied in asymmetric phantom mode to monopolar stimulation of the most-apical electrode using symmetric biphasic pulses. No differences were found between these two types of stimulation, either presented in isolation or with MOD stimulation of nearby apical electrodes.</p><p><strong>Conclusions: </strong>The results show that F0 encoding by SPP stimulation was better than with MOD stimulation, that it was robust to possible interference from MOD-stimulated electrodes, but that performance with combined stimulation was not better than with SPP alone. 
Contrary to previous data from pitch-ranking studies, we found no evid","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1550-1559"},"PeriodicalIF":2.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144555891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
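
For readers unfamiliar with the two F0-coding schemes being compared, the following NumPy sketch illustrates the idea in an idealized form: an SPP train places one pulse per F0 period, while the MOD scheme amplitude-modulates a roughly 2000-pulses-per-second carrier at F0. Real CI stimuli are biphasic or pseudomonophasic current pulses delivered through a research interface, so everything here (sampling rate, unit-amplitude monophasic pulses) is a simplified illustration, not the study's stimulus code.

```python
import numpy as np

fs = 100_000          # simulation sampling rate (Hz), illustrative
dur = 0.05            # 50 ms of stimulation
f0 = 130.0            # fundamental frequency to encode (Hz)
carrier_rate = 2000.0 # high-rate carrier, ~2000 pulses per second as in the MOD scheme

t = np.arange(0, dur, 1 / fs)

def pulse_train(rate, t, fs):
    """Unit-amplitude pulse every 1/rate seconds (idealized, monophasic)."""
    train = np.zeros_like(t)
    idx = (np.arange(0, t[-1], 1 / rate) * fs).astype(int)
    train[idx] = 1.0
    return train

# SPP: one pulse per F0 period, so F0 is carried explicitly by pulse timing
spp = pulse_train(f0, t, fs)

# MOD: ~2000-pps carrier whose amplitude is modulated at F0
mod = pulse_train(carrier_rate, t, fs) * (0.5 + 0.5 * np.sin(2 * np.pi * f0 * t))

print(f"SPP pulses in 50 ms: {int(spp.sum())}")            # about f0 * dur
print(f"MOD carrier pulses in 50 ms: {int((mod > 0).sum())}")
```
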
The WHAM Study: Socio-Emotional Well-being Effects of Hearing Aid Use and Mediation Through Improved Hearing Ability.
IF 2.8, CAS Tier 2 (Medicine)
Ear and Hearing Pub Date : 2025-11-01 Epub Date: 2025-07-25 DOI: 10.1097/AUD.0000000000001700
Lotte A Jansen, Marieke F van Wier, Birgit I Lissenberg-Witte, Cas Smits, Sophia E Kramer
{"title":"The WHAM Study: Socio-Emotional Well-being Effects of Hearing Aid Use and Mediation Through Improved Hearing Ability.","authors":"Lotte A Jansen, Marieke F van Wier, Birgit I Lissenberg-Witte, Cas Smits, Sophia E Kramer","doi":"10.1097/AUD.0000000000001700","DOIUrl":"10.1097/AUD.0000000000001700","url":null,"abstract":"<p><strong>Objectives: </strong>Hearing impairment can negatively impact socio-emotional well-being. While hearing aids (HA) may improve hearing ability, communication, social participation, and emotional well-being, longitudinal studies are scarce and evidence quality is low. This longitudinal study examines the associations between (research question [RQ] 1) HA uptake and socio-emotional well-being, mediation by self-perceived hearing disability, and differences between subgroups, (RQ2) frequency of HA use (daily number of hours) and socio-emotional well-being, and (RQ3) duration of HA use (years of use) and socio-emotional well-being.</p><p><strong>Design: </strong>Data from October 2006 to January 2024 from the Netherlands Longitudinal Study on Hearing were used for this study. Every 5 yrs, participants were invited to complete an online digits-in-noise hearing test and survey, which included variables on HA use, psychosocial health, tinnitus, hyperacusis, and self-perceived hearing disability. For RQs 1 and 2, cumulative data from three 5-yr intervals (baseline [T0] to 5-yr follow-up [T1], T1-T2, and T2-T3) was compiled, based on eligibility for a HA at the beginning of the studied time interval but not using it at that time and either reporting HA use (HA uptake) or no HA use (no HA uptake) at follow-up and frequency of use at follow-up. Differences between those who adopted a HA versus those who did not were examined while controlling for pre-(non)uptake socio-emotional outcomes. After applying exclusion criteria, the final samples included n = 281 unique participants for RQ1 and n = 280 for RQ2. For RQ3, participants with 5, 10, or 15 yrs of HA use were identified and analyzed to assess the impact of long-term use, with n = 180 unique participants in the final dataset. Outcomes assessed for each RQ were depression, anxiety, distress, somatization, social loneliness, emotional loneliness, and total loneliness. Gamma regression models with generalized estimating equations were performed to analyze all RQs.</p><p><strong>Results: </strong>Approximately 87% of participants were ≤65 yrs of age at T0. Among individuals without tinnitus, HA uptake was significantly associated with lower depression scores ( p < 0.05). Among those aged >65 yrs, HA uptake was significantly associated with lower total loneliness scores. No significant associations were found between HA uptake and anxiety, somatization, distress, and emotional loneliness. Self-perceived hearing disability did not mediate the relationship between HA uptake and socio-emotional well-being outcomes. No significant associations between the duration of HA use and socio-emotional well-being outcomes were found. 
Frequency of HA use was not significantly associated with any outcome except somatization, where using a HA for 1 to 4 hrs per day was significantly associated with lower somatization scores.</p><p><strong>Conclusions: </strong>This longitudinal study contributes valuable evidence to","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1641-1651"},"PeriodicalIF":2.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12533782/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
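
The abstract names gamma regression with generalized estimating equations (GEE) as the analysis family, with repeated observations per participant. The sketch below shows what such a specification can look like in Python with statsmodels; the data frame, column names, and covariates are hypothetical and only illustrate the model structure, not the WHAM analysis itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical long-format data: one row per participant per 5-yr interval.
n = 300
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n // 2), 2),
    "ha_uptake": rng.integers(0, 2, n),           # 1 = adopted a hearing aid
    "age_over_65": rng.integers(0, 2, n),
    "baseline_score": rng.gamma(2.0, 1.0, n),
    "depression": rng.gamma(2.0, 1.0, n) + 0.1,   # strictly positive outcome
})

# Gamma GEE with a log link and exchangeable within-participant correlation,
# mirroring the model family named in the abstract (covariates are illustrative).
model = smf.gee(
    "depression ~ ha_uptake + age_over_65 + baseline_score",
    groups="participant",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```
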
Associations Between Pre-Fitting Factors and 2-Year Hearing Aid Use Persistence, Derived From Health Records and Post-Fitting Battery Order Data of 284,175 US Veterans.
IF 2.8, CAS Tier 2 (Medicine)
Ear and Hearing Pub Date : 2025-11-01 Epub Date: 2025-06-16 DOI: 10.1097/AUD.0000000000001694
Graham Naylor, Lauren K Dillard, Oliver Zobay, Gabrielle H Saunders
{"title":"Associations Between Pre-Fitting Factors and 2-Year Hearing Aid Use Persistence, Derived From Health Records and Post-Fitting Battery Order Data of 284,175 US Veterans.","authors":"Graham Naylor, Lauren K Dillard, Oliver Zobay, Gabrielle H Saunders","doi":"10.1097/AUD.0000000000001694","DOIUrl":"10.1097/AUD.0000000000001694","url":null,"abstract":"<p><strong>Objectives: </strong>To examine associations between factors in domains of general health, hearing status, and demography, and subsequent long-term persistence of hearing aid (HA) use. By examining only non-modifiable factors available before HA fitting, we focus on potential indicators of a need for additional clinical effort to achieve satisfactory outcomes.</p><p><strong>Design: </strong>The initial dataset consisted of Electronic Health Records spanning 2012-2017, for all (731,231) patients with HA orders from U.S. Department of Veterans Affairs audiology between April 1, 2012 and October 31, 2014. Applying inclusion criteria (valid HA use persistence data, complete audiograms, age ≥50 years, audiometric pure-tone average (PTA) ≥25 dB HL, 5-year clearance period for health conditions) and excluding records with codes for cochlear implants, the final sample was 284,175 patients. Independent variables encompassed audiological (PTA, PTA asymmetry, audiogram slope, audiogram complexity, new versus experienced HA user), health (dementia, mild cognitive impairment, other mental health conditions, multimorbidity, in-patient episodes), and demographic (age, race, ethnicity, partnership status, income, urban-rural home location) domains. The outcome measure was HA use persistence at 2 years post-fitting, based on battery orders within 18 months preceding the 2-year mark. Multiple logistic regression modeling was applied with HA use persistence at 2 years post-fitting as outcome. Continuous variables were discretized; missing data were imputed.</p><p><strong>Results: </strong>After adjusting for covariates through the regression model, a significant positive association was found between PTA severity and HA use persistence, while PTA asymmetry, audiogram slope, and audiogram complexity were negatively associated with persistence. Being a new HA user, being diagnosed with dementia or other mental health conditions, and increased multimorbidity were all associated with reduced persistence. Persistence peaked at ages 70 to 79, and decreased for non-White races, Hispanic ethnicity, and those not married. No significant associations were found between persistence and tinnitus, urban-rural living, or mild cognitive impairment (when the model included dementia or other mental health conditions).</p><p><strong>Conclusions: </strong>Pre-fitting personal factors other than audiological ones have independent, and summatively major, influence on HA use persistence. 
While not being modifiable, some are potentially usable as flags for a differentiated approach to patient management.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1595-1602"},"PeriodicalIF":2.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12533770/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144303673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
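
The analysis described above is a multiple logistic regression with discretized continuous predictors and 2-year HA use persistence as the binary outcome. The sketch below shows what such a specification can look like in statsmodels; all variable names, cut points, and the simulated data are illustrative and do not reproduce the study's actual model or covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical pre-fitting predictors and a 2-year persistence flag.
n = 5000
df = pd.DataFrame({
    "pta": rng.uniform(25, 90, n),        # pure-tone average, dB HL
    "new_user": rng.integers(0, 2, n),
    "dementia": rng.integers(0, 2, n),
    "age": rng.integers(50, 95, n),
})

# Discretize continuous predictors, as described in the abstract.
df["pta_band"] = pd.cut(df["pta"], [25, 40, 55, 70, 90], include_lowest=True).astype(str)
df["age_band"] = pd.cut(df["age"], [50, 60, 70, 80, 95], include_lowest=True).astype(str)

# Simulate persistence so the example runs end to end (purely synthetic).
logit_p = -0.5 + 0.02 * (df["pta"] - 25) - 0.6 * df["new_user"] - 0.8 * df["dementia"]
df["persistent_2yr"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("persistent_2yr ~ C(pta_band) + C(age_band) + new_user + dementia", data=df)
print(model.fit(disp=False).summary())
```
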
Seeing a Talker's Mouth Reduces the Effort of Perceiving Speech and Repairing Perceptual Mistakes for Listeners With Cochlear Implants.
IF 2.8, CAS Tier 2 (Medicine)
Ear and Hearing Pub Date : 2025-11-01 Epub Date: 2025-06-16 DOI: 10.1097/AUD.0000000000001683
Justin T Fleming, Matthew B Winn
{"title":"Seeing a Talker's Mouth Reduces the Effort of Perceiving Speech and Repairing Perceptual Mistakes for Listeners With Cochlear Implants.","authors":"Justin T Fleming, Matthew B Winn","doi":"10.1097/AUD.0000000000001683","DOIUrl":"10.1097/AUD.0000000000001683","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Objectives: &lt;/strong&gt;Seeing a talker's mouth improves speech intelligibility, particularly for listeners who use cochlear implants (CIs). However, the impacts of visual cues on listening effort for listeners with CIs remain poorly understood, as previous studies have focused on listeners with typical hearing (TH) and featured stimuli that do not invoke effortful cognitive speech perception challenges. This study directly compared the effort of perceiving audiovisual speech between listeners who use CIs and those with TH. Visual cues were hypothesized to yield more relief from listening effort in a cognitively challenging speech perception condition that required listeners to mentally repair a missing word in the auditory stimulus. Eye gaze was simultaneously measured to examine whether the tendency to look toward a talker's mouth would increase during these moments of uncertainty about the speech stimulus.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Design: &lt;/strong&gt;Participants included listeners with CIs and an age-matched group of participants with typical age-adjusted hearing (N = 20 in both groups). The magnitude and time course of listening effort were evaluated using pupillometry. In half of the blocks, phonetic visual cues were severely degraded by selectively blurring the talker's mouth, which preserved stimulus luminance so visual conditions could be compared using pupillometry. Each block included a mixture of trials in which the sentence audio was intact, and trials in which a target word in the auditory stimulus was replaced by noise; the latter required participants to mentally reconstruct the target word upon repeating the sentence. Pupil and gaze data were analyzed using generalized additive mixed-effects models to identify the stretches of time during which effort or gaze strategy differed between conditions.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;Visual release from effort was greater and lasted longer for listeners with CIs compared with those with TH. Within the CI group, visual cues reduced effort to a greater extent when a missing word needed to be repaired than when the speech was intact. Seeing the talker's mouth also improved speech intelligibility for listeners with CIs, including reducing the number of incoherent verbal responses when repair was required. The two hearing groups deployed different gaze strategies when perceiving audiovisual speech. CI listeners looked more at the mouth overall, even when it was blurred, while TH listeners tended to increase looks to the mouth in the moment following a missing word in the auditory stimulus.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Conclusions: &lt;/strong&gt;Integrating visual cues from a talker's mouth not only improves speech intelligibility but also reduces listening effort, particularly for listeners with CIs. 
For listeners with CIs (but not those with TH), these visual benefits are magnified when a missed word needs to be mentally corrected-a common occurrence during everyday speech perception for individuals with hearing lo","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1502-1518"},"PeriodicalIF":2.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12353114/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144303674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
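
Pupil and gaze time courses were analyzed with generalized additive mixed-effects models. The sketch below fits plain generalized additive models (one smooth of time per visual condition) using the third-party pygam package; it omits the random-effects structure and significance machinery of the study's models and runs on simulated traces, so it only illustrates the general idea of comparing smoothed pupil curves between conditions.

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(2)

# Hypothetical baseline-corrected pupil dilation traces (mm) sampled over 0-4 s,
# for a "mouth visible" and a "mouth blurred" condition (40 trials each).
t = np.tile(np.linspace(0, 4, 200), 40)
clear = 0.25 * np.exp(-((t - 1.5) ** 2)) + rng.normal(0, 0.05, t.size)
blurred = 0.35 * np.exp(-((t - 1.8) ** 2)) + rng.normal(0, 0.05, t.size)

# One smooth of time per condition (a simplification of the generalized additive
# mixed models used in the study, which also included random effects).
gam_clear = LinearGAM(s(0)).fit(t.reshape(-1, 1), clear)
gam_blurred = LinearGAM(s(0)).fit(t.reshape(-1, 1), blurred)

grid = np.linspace(0, 4, 100).reshape(-1, 1)
release = gam_blurred.predict(grid) - gam_clear.predict(grid)
print("Largest estimated effort difference (blurred - clear):", round(float(release.max()), 3))
```
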
The Roles of Selective Attention and Asymmetric Experience in Bilateral Speech Interference for Single-Sided Deafness Cochlear Implant and Vocoder Listeners.
IF 2.8, CAS Tier 2 (Medicine)
Ear and Hearing Pub Date : 2025-11-01 Epub Date: 2025-06-19 DOI: 10.1097/AUD.0000000000001687
Joshua G W Bernstein, Matthew J Goupell
{"title":"The Roles of Selective Attention and Asymmetric Experience in Bilateral Speech Interference for Single-Sided Deafness Cochlear Implant and Vocoder Listeners.","authors":"Joshua G W Bernstein, Matthew J Goupell","doi":"10.1097/AUD.0000000000001687","DOIUrl":"10.1097/AUD.0000000000001687","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Objectives: &lt;/strong&gt;For many (especially older) single-sided-deafness (SSD) cochlear-implant (CI) users (one normal hearing and one CI ear), masking speech in the acoustic ear can interfere with CI-ear speech recognition. This study examined two possible explanations for this \"bilateral speech interference.\" First, it might reflect a general (i.e., not specific to spatial hearing or CI use) age-related \"selective-attention\" deficit, with some listeners having difficulty attending to target speech while ignoring an interferer. Second, it could be specific to asymmetric-hearing experience, reflecting maladaptive plasticity with the better ear becoming favored over time.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Design: &lt;/strong&gt;Twenty-eight listeners with bilaterally normal or near-normal hearing (NH) through 4 kHz completed a series of speech-on-speech masking tasks. Vocoder simulations of SSD-CI listening (four- or eight-channel noise-vocoded speech in the right ear, unprocessed speech in the left) tested whether acutely simulated asymmetric hearing would produce interference comparable to that previously observed for 13 SSD-CI listeners. Both groups had a wide age range (NH: 20 to 84 years; SSD-CI: 36 to 74 years) and were therefore expected to exhibit a wide range of selective-attention ability. The primary set of conditions measured bilateral speech interference. Target coordinate-response-measure sentences mixed with a masker of similar fundamental frequency (F0) were presented to the right (vocoded) ear at target-to-masker ratios of 0, 4, 8, or 16 dB. Silence or a copy of the masker was presented to the left (unprocessed) ear. Bilateral speech interference-the performance decrease from adding the masker copy to the left ear-was compared with previous SSD-CI results. NH listeners also completed two additional sets of conditions. The first set measured the F0-difference benefit for unprocessed monaural speech-on-speech masking. This is a likely indicator of non-spatial selective-attention ability, based on previous findings that older adults benefit less than younger adults from target-masker F0 differences. The second set measured contralateral-unmasking benefit. Target and masking speech were presented to the unprocessed ear and the benefit from presenting a copy of the masking speech to the vocoded ear was measured. A linear-mixed model analysis examined relationships between NH bilateral speech interference and age, monaural speech-on-speech masking (to estimate non-spatial selective attention), and contralateral unmasking. An additional analysis compared NH-Vocoder to SSD-CI interference.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;The strongest predictor of NH-vocoder interference was performance in the monaural different-F0 speech-on-speech masking condition ( p = 0.0024). Neither similar-F0 speech-on-speech masking performance, nor age, nor contralateral unmasking accounted for significant additional variance ( p = 0.11 to 0.69). 
Mean SSD-CI interference magnitud","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1490-1501"},"PeriodicalIF":2.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144327829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
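
The SSD-CI simulation relies on noise-vocoded speech. A compact SciPy sketch of a generic noise vocoder is shown below: the signal is split into log-spaced analysis bands, each band's envelope is extracted and used to modulate band-limited noise, and the channels are summed. The channel count, filter orders, and envelope cutoff are common but illustrative choices and do not reproduce the study's exact vocoder.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=6000.0, env_cutoff=300.0):
    """Simple noise vocoder sketch of the kind used to approximate CI hearing.
    Filter design choices here are illustrative, not the study's parameters."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced channel edges
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        env = np.abs(hilbert(band))                     # channel envelope
        lp_sos = butter(2, min(env_cutoff, (hi - lo) / 2), fs=fs, output="sos")
        env = np.clip(sosfiltfilt(lp_sos, env), 0, None)
        carrier = sosfiltfilt(band_sos, noise)          # band-limited noise carrier
        out += env * carrier
    return out

# Example: vocode 1 s of a synthetic vowel-like tone complex at 16 kHz
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speechish = sum(np.sin(2 * np.pi * f * t) for f in (150, 300, 450, 600))
vocoded = noise_vocode(np.asarray(speechish), fs, n_channels=4)
```
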
Spatial Position Modulates the Benefits of Auditory Inputs for Postural Control.
IF 2.8, CAS Tier 2 (Medicine)
Ear and Hearing Pub Date : 2025-11-01 Epub Date: 2025-07-17 DOI: 10.1097/AUD.0000000000001707
Daniel Paromov, Maxime Maheu, Benoit-Antoine Bacon, François Champoux
{"title":"Spatial Position Modulates the Benefits of Auditory Inputs for Postural Control.","authors":"Daniel Paromov, Maxime Maheu, Benoit-Antoine Bacon, François Champoux","doi":"10.1097/AUD.0000000000001707","DOIUrl":"10.1097/AUD.0000000000001707","url":null,"abstract":"<p><strong>Objectives: </strong>The study aimed to examine the contribution of the position of a sound source to static postural control. The authors hypothesized that in line with the auditory anchorage theory, more benefits would be observed when sounds are positioned in easy-to-localize locations.</p><p><strong>Design: </strong>A force plate was used to measure sway area, sway velocity, and standard deviation in 23 participants. Auditory stimuli were presented at various azimuth angles (0°, 45°, 90°), and their effects were compared with a silent baseline condition without any added auditory input.</p><p><strong>Results: </strong>The present results revealed a significant improvement in sway parameters when auditory inputs were added. However, in contrast to the 0° and 45° locations, the 90° location did not affect sway area and SD when compared with the condition without auditory input. Improvement was observed across all the locations of the auditory inputs for sway velocity.</p><p><strong>Conclusion: </strong>These findings support the auditory anchorage theory, suggesting that auditory objects positioned in areas that are easy to localize contribute more effectively to postural stabilization.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1674-1678"},"PeriodicalIF":2.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144651288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
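
Sway area, sway velocity, and standard deviation are standard center-of-pressure (COP) summaries from a force plate. The sketch below computes them using one common set of definitions (path length per unit time for velocity, a 95% prediction ellipse for area); the original study's exact formulas may differ, and the data are simulated.

```python
import numpy as np
from scipy.stats import chi2

def sway_metrics(cop_xy, fs):
    """Static-posturography summaries from center-of-pressure samples.

    cop_xy : (n_samples, 2) array of medio-lateral / antero-posterior COP (cm).
    fs     : force-plate sampling rate (Hz).
    Returns sway velocity (cm/s), 95% prediction-ellipse sway area (cm^2),
    and per-axis standard deviations. These follow one common convention.
    """
    path = np.sum(np.linalg.norm(np.diff(cop_xy, axis=0), axis=1))
    duration = (len(cop_xy) - 1) / fs
    velocity = path / duration

    cov = np.cov(cop_xy, rowvar=False)
    area = np.pi * chi2.ppf(0.95, df=2) * np.sqrt(np.linalg.det(cov))

    return velocity, area, cop_xy.std(axis=0)

# Example with 30 s of simulated quiet-stance COP data at 100 Hz
rng = np.random.default_rng(3)
cop = np.cumsum(rng.normal(0, 0.01, size=(3000, 2)), axis=0)
print(sway_metrics(cop, fs=100))
```
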
Uncovering Phenotypes in Sensorineural Hearing Loss: A Systematic Review of Unsupervised Machine Learning Approaches.
IF 2.8, CAS Tier 2 (Medicine)
Ear and Hearing Pub Date : 2025-11-01 Epub Date: 2025-08-07 DOI: 10.1097/AUD.0000000000001696
Lilia Dimitrov, Liam Barrett, Aizaz Chaudhry, Jameel Muzaffar, Watjana Lilaonitkul, Nishchay Mehta
{"title":"Uncovering Phenotypes in Sensorineural Hearing Loss: A Systematic Review of Unsupervised Machine Learning Approaches.","authors":"Lilia Dimitrov, Liam Barrett, Aizaz Chaudhry, Jameel Muzaffar, Watjana Lilaonitkul, Nishchay Mehta","doi":"10.1097/AUD.0000000000001696","DOIUrl":"10.1097/AUD.0000000000001696","url":null,"abstract":"<p><strong>Objectives: </strong>The majority of the 1.5 billion people living with hearing loss are affected by sensorineural hearing loss (SNHL). Reliably categorizing these individuals into distinct subtypes remains a significant challenge, which is a critical step for developing tailored treatment approaches. Unsupervised machine learning, a branch of artificial intelligence (AI), offers a promising solution to this issue. However, no study has yet compared the outcomes of different AI models in this context. The purpose of this review is to synthesize the existing literature on the application of unsupervised machine learning models to hearing health data for identifying subtypes of SNHL.</p><p><strong>Design: </strong>A systematic search was performed of the following databases: MEDLINE, PsycINFO (Ovid version), EMBASE, CINAHL, IEEE, and Scopus as well as a search of grey literature using GitHub and Base, and manual search (Jan 1990-Mar 2024). Studies were included only if they reported on adult patients with SNHL and used an unsupervised machine-learning approach. Quality assessment was performed using the APPRAISE-AI tool. The heterogeneity of studies necessitated a narrative synthesis of the results.</p><p><strong>Results: </strong>Seven studies were included in the analysis. Apart from one case-control study, all were cohort studies. Four different algorithms were used, with no study comparing the performance of more than one algorithm. Across these studies, only 2 distinct numbers of subtypes were identified: 4 and 11. However, the overall quality of the studies was deemed low, thus preventing definitive conclusions regarding model selection and the actual number of subtypes.</p><p><strong>Conclusions: </strong>This systematic review identifies key methodological practices that need to be improved before the potential of unsupervised machine learning models to subtype SNHL can be realized. Future research in this field should justify model selection, ensure reproducibility, use high-quality hearing data, and validate model findings.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1401-1411"},"PeriodicalIF":2.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12533775/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144796178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
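
As a concrete illustration of the kind of unsupervised pipeline the review surveys, the sketch below clusters simulated audiograms with k-means and uses the silhouette score to compare candidate numbers of subtypes, one way of justifying model selection as the authors recommend. It is a generic example; none of the reviewed studies' data, features, or algorithms are reproduced here.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(4)

# Hypothetical audiograms: thresholds (dB HL) at 0.25-8 kHz for 500 ears,
# drawn from a "flat loss" and a "sloping loss" generating process.
freqs = [250, 500, 1000, 2000, 4000, 8000]
flat = rng.normal(40, 8, size=(250, len(freqs)))
sloping = rng.normal(20, 8, size=(250, len(freqs))) + np.linspace(0, 50, len(freqs))
X = StandardScaler().fit_transform(np.vstack([flat, sloping]))

# Compare candidate subtype counts with the silhouette score.
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
```
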
Resting-State Functional Connectivity Predicts Cochlear-Implant Speech Outcomes.
IF 2.8, CAS Tier 2 (Medicine)
Ear and Hearing Pub Date : 2025-11-01 Epub Date: 2025-06-17 DOI: 10.1097/AUD.0000000000001678
Jamal Esmaelpoor, Tommy Peng, Beth Jelfs, Darren Mao, Maureen J Shader, Colette M McKay
{"title":"Resting-State Functional Connectivity Predicts Cochlear-Implant Speech Outcomes.","authors":"Jamal Esmaelpoor, Tommy Peng, Beth Jelfs, Darren Mao, Maureen J Shader, Colette M McKay","doi":"10.1097/AUD.0000000000001678","DOIUrl":"10.1097/AUD.0000000000001678","url":null,"abstract":"","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1679"},"PeriodicalIF":2.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144310844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Speech-in-Noise Ability and Signal to Noise Ratio Predict the Timing of Hearing-Impaired Listeners' Intertalker Saccades When Observing Conversational Turn-Taking: An Explorative Investigation.
IF 2.8, CAS Tier 2 (Medicine)
Ear and Hearing Pub Date : 2025-11-01 Epub Date: 2025-07-15 DOI: 10.1097/AUD.0000000000001701
Martha M Shiell, Sergi Rotger-Griful, Martin A Skoglund, Gitte Keidser, Johannes Zaar
{"title":"Speech-in-Noise Ability and Signal to Noise Ratio Predict the Timing of Hearing-Impaired Listeners' Intertalker Saccades When Observing Conversational Turn-Taking: An Explorative Investigation.","authors":"Martha M Shiell, Sergi Rotger-Griful, Martin A Skoglund, Gitte Keidser, Johannes Zaar","doi":"10.1097/AUD.0000000000001701","DOIUrl":"10.1097/AUD.0000000000001701","url":null,"abstract":"<p><strong>Objectives: </strong>We explored the hypothesis that, when listeners visually follow the turn-taking of talkers engaged in a conversation, the timing of their eye movements is related to their ability to follow the conversation.</p><p><strong>Design: </strong>We made use of a re-purposed dataset where adults with hearing impairment (N = 17), assisted by hearing aids, observed audiovisual recordings of dyadic conversations presented via a television screen and loudspeakers. The recordings were presented with multitalker babble noise at four signal to noise ratios (SNRs), in 4-dB steps ranging from -4 to 8 dB, to modulate the participants' ability to follow the conversation. We extracted time windows around conversation floor transfers (FTs) in the stimulus where participants reacted by moving their gaze from one talker to the next, termed FT-intertalker saccades (ITS). We recorded the timing of this eye movement relative to the onset of the new talker's speech. In addition, participants completed a separate word-recognition test to measure their speech perception in noise (SPIN) ability at the same SNRs as used for the conversation stimuli. We predicted that the timing of FT-ITS would be delayed with difficult SNR levels and for listeners with low SPIN ability. The effect of SPIN ability was tested first as a continuous variable, and subsequently with participants divided into high and low SPIN-ability groups.</p><p><strong>Results: </strong>Multilevel linear modeling showed that the timing of FT-ITS was predicted by SNR condition and SPIN group, but no effect was found for SPIN ability as a continuous variable. Post hoc comparisons (uncorrected for multiple comparisons) indicated that delayed FT-ITS were associated with low SPIN ability, and both the hardest and easiest SNR conditions. The full model accounted for 34.5% of the variance in the data, but the fixed effects of SPIN and SNR together accounted for only 2.3%.</p><p><strong>Conclusions: </strong>Although the results should be interpreted with caution due to limitations in the experiment design, they provide preliminary support that FT-ITS timing can be used as a measure of hearing-impaired listeners' ability to follow a conversation. This first exploration of this question can serve future studies on this topic, providing guidance on the range of perceptual difficulty where this measure may be sensitive, and recommending a modeling approach that takes into account differences between stimuli.</p>","PeriodicalId":55172,"journal":{"name":"Ear and Hearing","volume":" ","pages":"1652-1660"},"PeriodicalIF":2.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12533768/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144638681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
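
The multilevel linear modeling described above can be sketched as a mixed-effects model: fixed effects of SNR condition and SPIN group, plus a random intercept per participant to absorb between-listener differences. The statsmodels example below uses simulated latencies and hypothetical column names purely to illustrate the model structure, not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Hypothetical FT-ITS latencies (s, relative to the new talker's speech onset)
# for 17 participants at four SNRs; column names and effect sizes are illustrative.
rows = []
for pid in range(17):
    spin_group = "low" if pid < 8 else "high"
    subj_offset = rng.normal(0, 0.15)                 # participant-level shift
    for snr in (-4, 0, 4, 8):
        for _ in range(20):                           # 20 floor transfers per SNR
            latency = (0.9 + subj_offset - 0.02 * snr
                       + (0.2 if spin_group == "low" else 0.0)
                       + rng.normal(0, 0.25))
            rows.append({"participant": pid, "snr": snr,
                         "spin_group": spin_group, "ft_its_latency": latency})
df = pd.DataFrame(rows)

# Multilevel (mixed-effects) linear model: fixed effects of SNR and SPIN group,
# random intercept per participant.
model = smf.mixedlm("ft_its_latency ~ C(snr) + spin_group",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```
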