Trends in Hearing — Latest Articles

Functional Plasticity in Auditory and Visual Discrimination Processing in Patients with Single-Sided Deafness: An EEG Study.
IF 3.0 | CAS Zone 2, Medicine
Trends in Hearing | Pub Date: 2026-01-01 | Epub Date: 2026-01-19 | DOI: 10.1177/23312165251413850
Qiaoyu Liu, Yufei Qiao, Min Zhu, Jiayan Yang, Wen Sun, Yaohan Chen, Saiyi Jiao, Hang Shen, Yingying Shang
Abstract: Single-sided deafness (SSD) is a typical condition of partial auditory deprivation. Total auditory deprivation triggers cross-modal neural reorganization, but in patients with partial hearing deprivation, how residual auditory function is balanced against the compensatory plasticity of other sensory modalities remains unclear. Previous studies have reported conflicting findings, potentially due to differences in study populations or task designs. Here, we investigated hierarchical neural processing in a homogeneous cohort of 37 congenital SSD patients (31.6 ± 6.5 years, 18 males) and 32 normal-hearing (NH) controls (30.6 ± 7.3 years, 14 males) using both auditory and visual oddball tasks with electroencephalography (EEG). In the auditory task, SSD patients showed reduced amplitudes of early exogenous components (N1, P2) and mismatch negativity (MMN), but preserved late endogenous components (N2, P3), compared with NH controls. Conversely, in the visual task, SSD patients showed increased early visual N1 amplitudes with intact visual mismatch negativity (vMMN) and endogenous components (N2, P3). No latency differences in these components were observed. These results reveal a difference in plasticity between lower- and higher-level processing. Our findings indicate that functional plasticity in SSD patients occurs predominantly at sensory stages and is characterized by diminished auditory and compensatorily elevated visual neural activity, whereas higher-level discrimination processing in either modality is largely unaffected. These findings clarify prior discrepancies, establish a hierarchical framework for understanding neuroplasticity in partial sensory deprivation, and have implications for rehabilitation strategies for SSD patients.
Trends in Hearing, vol. 30, 23312165251413850. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12816557/pdf/
Citations: 0
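The MMN contrast described above is, at its core, a difference wave: the deviant-tone ERP minus the standard-tone ERP, averaged over a latency window. A minimal sketch of that computation, with hypothetical helper names and an assumed 100-250 ms window (not the authors' analysis pipeline):

```python
# Sketch: computing a mismatch-negativity (MMN) amplitude from oddball ERPs.
# Hypothetical illustration only -- not the authors' analysis pipeline.
# Inputs: per-trial voltage traces (microvolts) sampled at fs Hz.

def erp(trials):
    """Average voltage across trials, sample by sample."""
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def mean_amplitude(trace, fs, t_start, t_end):
    """Mean voltage inside a latency window (seconds after stimulus onset)."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    window = trace[i0:i1]
    return sum(window) / len(window)

def mmn_amplitude(standard_trials, deviant_trials, fs, window=(0.1, 0.25)):
    """MMN = deviant ERP minus standard ERP, averaged over the MMN window."""
    diff = [d - s for d, s in zip(erp(deviant_trials), erp(standard_trials))]
    return mean_amplitude(diff, fs, *window)
```

A "reduced MMN amplitude," as reported for the SSD group, means this difference value is closer to zero (less negative) than in controls.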
Identifying Hearing Difficulty Moments in Conversational Audio.
IF 3.0 | CAS Zone 2, Medicine
Trends in Hearing | Pub Date: 2026-01-01 | Epub Date: 2026-04-23 | DOI: 10.1177/23312165261446379
Jack Collins, Adrian Buzea, Chris Collier, Alejandro Ballesta Rosen, Julian Maclaren, Richard F Lyon, Kelly Miles, Simon Carlile
Abstract: Individuals regularly experience Hearing Difficulty Moments in everyday conversation. Identifying these moments is particularly significant in the field of hearing assistive technology, where timely interventions are key for real-time hearing assistance. In this article, we propose and compare machine learning solutions for the temporal detection of segments containing Hearing Difficulty Moments in conversational audio. We show that audio language models, through their multimodal reasoning capabilities, can achieve state-of-the-art results for this task, significantly outperforming both a simple automatic speech recognition (ASR) hotword heuristic and a more conventional fine-tuning approach with Wav2Vec, an audio-only input architecture that is state-of-the-art for ASR.
Trends in Hearing, vol. 30, 23312165261446379. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13121483/pdf/
Citations: 0
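Temporal detection of difficulty segments ultimately means turning per-frame model scores into time intervals. A generic threshold-and-merge post-processing sketch, purely illustrative (the paper's actual models, thresholds, and segmentation logic are not specified here):

```python
# Sketch: turning per-frame difficulty scores into detected segments.
# Hypothetical post-processing -- not the method from the paper.

def detect_segments(frame_scores, frame_dur, threshold=0.5, min_gap=1):
    """Return (start_s, end_s) intervals where scores exceed threshold.

    Runs of sub-threshold frames no longer than min_gap frames are bridged,
    so brief dips do not split one difficulty moment into two.
    """
    segments = []
    start = None          # index of the first frame in the open segment
    gap = 0               # current run of sub-threshold frames
    for i, score in enumerate(frame_scores):
        if score >= threshold:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap > min_gap:        # gap too long: close the segment
                segments.append((start * frame_dur, (i - gap + 1) * frame_dur))
                start, gap = None, 0
    if start is not None:            # close a segment running to the end
        end = len(frame_scores) - gap
        segments.append((start * frame_dur, end * frame_dur))
    return segments
```

With 1-second frames, `detect_segments([0.9, 0.9, 0.1, 0.9, 0.1, 0.1, 0.1, 0.9], 1.0)` bridges the single dip at frame 2 but splits at the three-frame gap, yielding two intervals.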
Benefits of Bilateral Bone Conduction Device Use Including Osia Devices in Children and Adolescents With Bilateral Atresia.
IF 3.0 | CAS Zone 2, Medicine
Trends in Hearing | Pub Date: 2026-01-01 | Epub Date: 2026-02-18 | DOI: 10.1177/23312165261422955
Robel Z Alemu, Alan Blakeman, Jaina Negandhi, Blake C Papsin, Sharon L Cushing, Karen A Gordon
Abstract: This study aimed to characterize the effects of bilateral bone conduction devices (BCDs), including the Cochlear™ Osia® (Osia) and the Cochlear™ percutaneous Baha® Connect System (Baha), on localization of stationary and moving sound in children and adolescents with bilateral atresia. Participants were 11 listeners with BCDs (mean age 14.7 years, SD 3.5) and 11 age-matched controls (mean age 14.9 years, SD 1.9). Outcomes were word recognition in quiet and noise; spatial release from masking (SRM), i.e., spondee-word recognition thresholds in noise at co-located (0°) or separated (90° left/right) positions; self-reported hearing on the Speech, Spatial and Qualities of Hearing Scale (SSQ); and localization of stationary and moving sound with tracking of real-time, unrestricted head movements. BCD users had reduced speech perception accuracy in noise during unilateral listening (p < .001) and higher speech recognition thresholds than controls (p = .001). BCD users made larger errors than controls during stationary (p < .001) and moving (p < .001) sound localization, consistent with self-reported spatial hearing challenges. BCD users' errors were significantly reduced during bilateral use compared with unilateral use for stationary (p < .01) but not always for moving (right unilateral: p < .01; left unilateral: p = .46) sound localization. BCD users spent less time moving their heads in the correct direction than controls for both stationary and moving sound localization (p < .01). Results indicate that children and adolescents with BCDs show improved localization of stationary but not moving sound sources with bilateral device use compared to unilateral use. This finding provides evidence for some access to binaural cues and mitigation of head shadow despite transcranial attenuation, but ineffective use of head movements.
Trends in Hearing, vol. 30, 23312165261422955. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12921180/pdf/
Citations: 0
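The SRM outcome above reduces to a simple difference of thresholds: the speech recognition threshold (in dB SNR) with target and maskers co-located, minus the threshold with them spatially separated. A minimal sketch with made-up numbers:

```python
# Sketch: spatial release from masking (SRM) as a threshold difference.
# The SRT values below are invented for illustration, not study data.

def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """SRM (dB) = co-located SRT minus separated SRT; positive = benefit."""
    return srt_colocated_db - srt_separated_db

# Example: a listener needs -2 dB SNR with co-located maskers but manages
# -8 dB SNR when the maskers are moved 90 degrees away: 6 dB of release.
srm = spatial_release_from_masking(-2.0, -8.0)
```

A larger SRM indicates better use of spatial (including binaural) cues; reduced SRM in BCD users would be consistent with the limited binaural access the authors describe.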
Leveraging Monaural Exposures to Reveal Early Effects of Noise: Evidence from Police Radio Ear-Piece Use.
IF 3.0 | CAS Zone 2, Medicine
Trends in Hearing | Pub Date: 2026-01-01 | Epub Date: 2026-01-30 | DOI: 10.1177/23312165251410988
Hannah Guest, Paul Elliott, Martie van Tongeren, Joseph Laycock, Steven Thorley-Lawson, Michael A Stone, Michael T Loughran, Christopher J Plack
Abstract: Research into the long-term effects of noise on hearing is often confounded by health and lifestyle differences between individuals. UK police radio ear-pieces are capable of emitting high sound levels and, crucially, are worn in one ear, allowing between-ear comparisons that control for individual-level confounding factors. Low volume-control settings are recommended to reduce risk to police hearing, yet actual usage patterns and auditory effects remain unexamined. This study used a large-scale survey (N = 4,498) to assess ear-piece noise exposure and the associated hearing health. Most participants reported using high volume-control settings, and 45.2% reported experiencing signs of temporary threshold shift (TTS) in the exposed ear. Estimated weekly-averaged noise exposures frequently exceeded the UK's 85 dBA Upper Exposure Action Value. Ear-piece use was associated with a 73% (95% confidence interval [CI] 46-106%) increased risk of persistent tinnitus, which on mediation analysis appeared to be driven by a subset of users who experienced signs of TTS. Importantly, tinnitus location was associated with the side of exposure, suggesting that the tinnitus related to device use rather than to other factors. In contrast, Digits-In-Noise thresholds showed no relation to noise exposure; potential explanations include compensatory auditory training effects, but limitations of the Digits-In-Noise data must also be considered. Findings highlight a need for further investigation into hearing risks in police personnel, including in-person auditory testing. Risk mitigation strategies might involve improved device design, training on safe use, and expanded hearing health surveillance. Given the potential for cumulative auditory damage, TTS may serve as an early warning sign, warranting attention in broader noise-exposed populations.
Trends in Hearing, vol. 30, 23312165251410988. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12858745/pdf/
Citations: 0
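The "weekly-averaged noise exposure" compared against the 85 dBA Upper Exposure Action Value is an energy average, not an arithmetic one: daily exposures are normalized to an 8-hour day, converted to intensity, averaged over a 5-day week, and converted back to decibels (the L_EP,d / L_EP,w scheme of the UK Control of Noise at Work Regulations 2005). A sketch of that arithmetic; the study's own exposure estimation from survey responses is more involved, and the numbers below are invented:

```python
import math

# Sketch: energy-averaging behind a "weekly-averaged noise exposure"
# (UK Control of Noise at Work Regulations 2005 scheme). Illustrative only.

def daily_exposure_db(laeq_db, duration_h):
    """L_EP,d: A-weighted level over the exposure, normalized to an 8-h day."""
    return laeq_db + 10 * math.log10(duration_h / 8.0)

def weekly_exposure_db(daily_levels_db):
    """L_EP,w: energy average of daily exposures over a 5-day working week."""
    total = sum(10 ** (0.1 * level) for level in daily_levels_db)
    return 10 * math.log10(total / 5.0)

# Example: five 4-h shifts at 91 dBA LAeq each give an L_EP,d of about
# 88 dBA per day, and a weekly average of about 88 dBA -- above the
# 85 dBA Upper Exposure Action Value.
days = [daily_exposure_db(91.0, 4.0)] * 5
```

Because the average is on an energy scale, a single loud day dominates the week: one 94 dBA day plus four quiet days still averages well above four moderate days.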
The First Cadenza Challenge: Perceptual Evaluation of Machine Learning Systems to Improve Audio Quality of Popular Music for Those with Hearing Loss.
IF 3.0 | CAS Zone 2, Medicine
Trends in Hearing | Pub Date: 2026-01-01 | Epub Date: 2026-01-30 | DOI: 10.1177/23312165251408761
Scott Bannister, Jennifer Firth, Gerardo Roa-Dabike, Rebecca Vos, William Whitmer, Alinka E Greasley, Simone Graetzer, Bruno Fazenda, Trevor Cox, Jon Barker, Michael A Akeroyd
Abstract: Music is central to many people's lives, and hearing loss (HL) is often a barrier to musical engagement. Hearing aids (HAs) help, but their efficacy in improving speech does not consistently translate to music. This research evaluated systems submitted to the 1st Cadenza Machine Learning Challenge, in which entrants aimed to improve music audio quality for HA users through source separation and remixing. HA users (N = 53, ranging from "mild" to "moderately severe" HL) assessed eight challenge systems (including one baseline that used the HDemucs source separation algorithm, remixed to the original mixes of the music samples, and applied National Acoustic Laboratories Revised amplification) and rated 200 music samples processed for their HL. Participants rated samples on basic audio quality, clarity, harshness, distortion, frequency balance, and liking. Results suggest no entrant system surpassed the baseline for audio quality, although differences emerged in system efficacy across HL severities. Clarity and distortion ratings were most predictive of audio quality. Finally, some systems produced signals with higher objective loudness, spectral flux, and clipping with increasing HL severity; these received lower audio quality ratings from listeners with moderately severe HL. Findings highlight how music enhancement requires varied solutions and tests across a range of HL severities. This challenge provided a first application of source separation to music listening with HL. However, state-of-the-art source separation algorithms limited the diversity of entrant solutions, resulting in no improvements over the baseline; to promote the development of innovative processing strategies, future work should increase the complexity of the music listening scenarios to be addressed through source separation.
Trends in Hearing, vol. 30, 23312165251408761. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12858752/pdf/
Citations: 0
Visual Gestures of the Head and Eyebrows Support Prosody Perception for Individuals with Cochlear Implants.
IF 3.0 | CAS Zone 2, Medicine
Trends in Hearing | Pub Date: 2026-01-01 | Epub Date: 2026-03-30 | DOI: 10.1177/23312165261437901
Justin T Fleming, Harley J Wheeler, Matthew B Winn
Abstract: Visual gestures, especially head movements and eyebrow raises, are time-locked to acoustic cues during the expression of spoken prosody. The present study examined the role these visual cues play in prosody perception, particularly for individuals with cochlear implants (CIs), who often experience challenges understanding prosody due to reduced access to pitch cues. A vocal mimicry paradigm was used to obtain granular, objective measures of prosody perception through acoustic analysis of mimicked sentences. Stimuli consisted of audio-visual recordings from one talker that captured naturally occurring variability in the expression of auditory and visual cues to word focus. Participants mimicked these natural recordings, as well as prosody-transplanted stimuli that allowed us to isolate the influence of auditory or visual prosody cues while holding the other modality at a neutral level (broad focus). Participants converted visual prosody cues into acoustic correlates of prosody in their mimicry, repeating acoustically unfocused words with higher F0 and intensity when those words were paired with video containing head and eyebrow gestures. This visual influence was stronger for participants with CIs than for an age-matched group of typical-hearing listeners. CI participants who were less successful at acoustic mimicry tended to be more influenced by visual cues. These results indicate that CI listeners compensate for degraded auditory cues by integrating visual gestures into their perception of spoken prosody, potentially highlighting new targets for multisensory counseling or training.
Trends in Hearing, vol. 30, 23312165261437901. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13039648/pdf/
Citations: 0
Cognitive Shifting Ability Does Not Predict Self-Perceived Hearing Difficulties in Adult Hearing-Aid Users.
IF 3.0 | CAS Zone 2, Medicine
Trends in Hearing | Pub Date: 2026-01-01 | Epub Date: 2026-03-17 | DOI: 10.1177/23312165261433705
Francesca Molinari Luccini, Henrik Danielsson, Victoria Stenbäck, Elaine Hoi Ning Ng, Emil Holmer
Abstract: Age-related hearing loss (ARHL) often leads to hearing difficulties, impacting communication and daily functioning even among hearing-aid users. While hearing loss and cognitive functions, such as cognitive shifting ability, have been proposed as predictors of hearing difficulties, their specific contributions remain unclear. This study investigated whether hearing loss and cognitive shifting ability predict self-reported hearing difficulties across the subscales of the Speech, Spatial, and Qualities of Hearing Scale (SSQ) questionnaire in adults with ARHL who use hearing aids, and whether sex moderates these associations, while controlling for age and level of education. A total of 215 adults underwent audiometry, completed a cognitive flexibility task, and answered the SSQ questionnaire; 203 of them (89 females) were included in the analysis. Hierarchical multiple regression analyses revealed that less hearing loss predicted fewer hearing difficulties on all three SSQ subscales, and higher education level was a significant predictor of fewer reported difficulties on the Speech and Spatial subscales. Contrary to our expectations, cognitive shifting ability was not associated with hearing difficulties on any subscale, nor did sex moderate the associations between cognitive shifting ability, degree of hearing loss, and hearing difficulties. The findings highlight the influence of hearing loss and education on self-reported hearing difficulties and suggest that cognitive shifting ability does not play a significant role. Future studies should explore other cognitive and demographic factors that might contribute to hearing difficulties in hearing-aid users.
Trends in Hearing, vol. 30, 23312165261433705. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13009968/pdf/
Citations: 0
Beyond the Moment: How EMA Reporting Periods Affect Sampled Situations and Sensitivity to Hearing Aid Differences.
IF 3.0 | CAS Zone 2, Medicine
Trends in Hearing | Pub Date: 2026-01-01 | Epub Date: 2026-03-07 | DOI: 10.1177/23312165261421698
Petra von Gablenz, Inga Holube, Nadja Schinkel-Bielefeld
Abstract: Ecological Momentary Assessment (EMA) is a timely method for capturing differences between hearing aids (HAs) or HA features in real-world environments. Studies vary greatly in their reporting periods, specifically in how long ago an event can have occurred and still be reported, and in how events are selected when not summarizing over a period of time. The potential effects of different reporting periods on HA or HA-feature contrast remain unexplored. In a 14-day EMA study, 22 hearing-aid users assessed both a basic and an advanced HA program, which were switched daily without participant control. Several times daily, participants used a smartphone app to report on satisfaction with the HA program, overall listening experience, sound quality, and listening effort. The app had participants focus on the current situation, denoting a momentary reporting period, and on the worst listening experience within the previous 30 min, denoting a short-term retrospective period. Participants also completed an end-of-day questionnaire. Sound-pressure levels and HA classifier data were recorded continuously. Mixed modeling was used to examine the impact of reporting periods on ratings. The main findings showed no rating differences between the two HA programs in momentary or end-of-day assessments. However, differences emerged in short-term retrospective reporting, that is, in assessments of the worst experience within the preceding 30 min. Various time effects were also observed. Depending on the reporting period, the analysis of sound-pressure levels and HA classifier data revealed variations in the real-world snapshots. In conclusion, this study underscores the need to diligently define reporting periods in hearing-related EMA research.
Trends in Hearing, vol. 30, 23312165261421698. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12967378/pdf/
Citations: 0
Assessing Visual Contributions to the Perception of Speech in Noise.
IF 3.0 | CAS Zone 2, Medicine
Trends in Hearing | Pub Date: 2026-01-01 | Epub Date: 2026-03-16 | DOI: 10.1177/23312165261428755
Lida C Alampounti, Hannah Cooper, Stuart Rosen, Jennifer K Bizley
Abstract: Investigations of the role of audiovisual integration in speech-in-noise perception have largely focused on the benefits provided by lipreading cues. Nonetheless, audiovisual temporal coherence can offer a complementary advantage in auditory selective attention tasks. We developed an audiovisual speech-in-noise test to assess the benefit of visually conveyed phonetic information and visual contributions to auditory streaming. The test was a video version of the Children's Coordinate Response Measure with a noun as the second keyword (vCCRMn). The vCCRMn allowed us to measure speech reception thresholds in the presence of two competing talkers under three visual conditions: a full naturalistic video (AV); a video interrupted during the target-word presentation (Inter), thus providing no lipreading cues; and a static image of a talker with audio (A). In each case, the video or image could display either the target talker or one of the two competing maskers. We assessed speech reception thresholds in each visual condition in 37 young (≤35 years old) normal-hearing participants. Lipreading ability was independently assessed with the Test of Adult Speechreading (TAS). Results showed that both the target-coherent AV and Inter visual conditions offered participants a listening benefit over the static-image condition. Target-coherent visual information provided the greatest listening advantage in the full audiovisual condition, but a robust advantage was also seen in the interrupted condition, where listeners were unable to lipread the target words. Together, our results are consistent with visual information providing multiple benefits to listening, through lipreading and enhanced auditory streaming.
Trends in Hearing, vol. 30, 23312165261428755. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13009578/pdf/
Citations: 0
Characterizing Perception of Impulse Sounds Through Subjective Ratings and Pupillometric Responses.
IF 3.0 | CAS Zone 2, Medicine
Trends in Hearing | Pub Date: 2026-01-01 | Epub Date: 2026-04-30 | DOI: 10.1177/23312165261446374
Luca Wiederschein, Anna Schließ, Florian Denk, Hendrik Husstedt
Abstract: Impulse sounds such as clinking dishes or slamming objects are often perceived as particularly intense or uncomfortable, yet their perceptual characterization remains insufficiently understood. The present study systematically examined subjective and physiological responses to ecologically valid impulse sounds in young normal-hearing adults. Twenty-seven participants rated nine impulse sounds presented at peak levels between 80 and 120 dB SPL in three acoustic conditions: anechoic, reverberant (room-convolved), and anechoic embedded in the International Speech Test Signal (ISTS) at 65 dB SPL. Loudness and discomfort were assessed using categorical rating scales, and pupil dilation was recorded as an index of autonomic arousal. Both perceptual scales followed Stevens-type growth functions. Loudness increased gradually with level, whereas discomfort showed a delayed onset but steeper growth once activated. Test-retest reliability was excellent for both scales (ICC ≈ 0.88). Acoustic condition significantly influenced perception: reverberant stimuli yielded higher perceived intensity and lower 50% thresholds than anechoic presentations for most impulse types, while embedding impulses in speech produced comparatively small effects. Mean pupil dilation increased with presentation level and was significantly associated with both loudness and discomfort ratings. Linear mixed-effects modeling demonstrated that subjective ratings explained pupil dilation more strongly than physical level once both were included in the model. Neither uncomfortable loudness levels nor self-reported sound sensitivity significantly predicted ratings or physiological responses. These findings provide a systematic characterization of impulse-sound perception in a normal-hearing population and demonstrate a close correspondence between subjective intensity judgments and autonomic responses under controlled laboratory conditions.
Trends in Hearing, vol. 30, 23312165261446374.
Citations: 0
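A Stevens-type growth function, as mentioned in the abstract above, has the form rating = k · magnitude^a, which becomes a straight line in log-log coordinates and can therefore be fitted by ordinary least squares. A minimal sketch of that fit on synthetic data (illustrative only; the study's categorical scales and actual fitting procedure are not reproduced here):

```python
import math

# Sketch: fitting a Stevens-type growth function, rating = k * magnitude^a,
# by least squares in log-log coordinates. Synthetic data, hypothetical names.

def fit_stevens(magnitudes, ratings):
    """Return (k, a) from a linear fit of log(rating) on log(magnitude)."""
    xs = [math.log(m) for m in magnitudes]
    ys = [math.log(r) for r in ratings]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    k = math.exp(mean_y - a * mean_x)
    return k, a

# Noiseless data generated from rating = 2 * m^0.6 recovers both parameters.
ms = [0.5, 1.0, 2.0, 4.0]
rs = [2 * m ** 0.6 for m in ms]
k, a = fit_stevens(ms, rs)
```

In this framework, the abstract's contrast between the two scales corresponds to different fitted parameters: loudness growing gradually (a moderate exponent across the whole level range) versus discomfort rising steeply only beyond an onset level.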