Ear and Hearing. Pub Date: 2025-03-01. Epub Date: 2024-11-25. DOI: 10.1097/AUD.0000000000001586
Uğur Belet, Ateş Mehmet Akşit, Ebru Kösemihal
Comparison of LS CE-Chirp and Click Stimuli in Auditory Brainstem Responses in High-Frequency Hearing Loss.

Objectives: The auditory brainstem response (ABR) is an evoked potential used to estimate hearing thresholds and identify potential auditory pathologies. Although a click stimulus is generally used as the auditory stimulus in diagnostics, recent reports show that the Level-Specific CE-Chirp (LS CE-Chirp) stimulus can also be used for clinical diagnosis. In this study, we compared the auditory brainstem test outcomes of the LS CE-Chirp and click stimuli in individuals with high-frequency hearing loss (HFHL).

Design: Patients with HFHL (n = 30) and individuals with normal hearing (n = 30) were included in the study. Audiometric pure-tone thresholds were determined for all subjects at 250 to 8000 Hz. For individuals with normal hearing, pure-tone thresholds were required to be ≤20 dB HL at all frequencies. HFHL cases were selected from people with at least 5 years of hunting experience. All subjects were tested with ABR at 80 and 60 dB nHL, using click and LS CE-Chirp stimuli at a rate of 11.1/sec. ABR wave I, III, and V peak latencies and I to V interpeak latency values were compared within and among the groups.

Results: In the control group, longer latency values were obtained with the LS CE-Chirp stimulus than with the click stimulus at 80 dB nHL and a stimulus rate of 11.1/sec. No significant difference was detected between the LS CE-Chirp and click stimuli at 80 dB nHL in the HFHL group (p > 0.005). When the HFHL patients were classified according to the 4000 Hz threshold, the click stimulus was found to be more compatible with the behavioral 4000 Hz threshold.

Conclusions: The wave latency values obtained with the LS CE-Chirp stimulus in the HFHL group, unlike those obtained with click stimulation, were less affected by the degree of hearing loss. For this difference to have diagnostic value, further studies are needed on patients with different pathologies and hearing loss configurations.

Pages: 347-352.
Ear and Hearing. Pub Date: 2025-03-01. Epub Date: 2024-09-20. DOI: 10.1097/AUD.0000000000001591
Tetsuaki Kawase, Chie Obuchi, Jun Suzuki, Yukio Katori, Shuichi Sakamoto
Masking Effects Caused by Contralateral Distractors in Participants With Versus Without Listening Difficulties.

Objectives: To examine the effects of distractor sounds presented to the contralateral ear on speech intelligibility in patients with listening difficulties without apparent peripheral pathology and in control participants.

Design: This study examined 15 control participants (age range, 22 to 30 years) without any complaints of listening difficulties and 15 patients (age range, 15 to 33 years) diagnosed with listening difficulties without apparent peripheral pathology in the outpatient clinic of the Department of Otolaryngology-Head and Neck Surgery, Tohoku University Hospital. Speech intelligibility for 50 Japanese monosyllables presented to the right ear was examined under three conditions: "without contralateral sound," "with continuous white noise in the contralateral ear," and "with music stimuli in the contralateral ear."

Results: (1) Speech intelligibility was significantly worse in the patient group with both contralateral music and contralateral noise stimuli; (2) speech intelligibility was significantly worse with contralateral music than with contralateral noise in the patient group; (3) there was no significant difference in speech intelligibility among the three contralateral masking conditions (without contra-stimuli, with contra-noise, and with contra-music) in the control group, although average and median speech intelligibility tended to be worse with contralateral music stimuli than without contralateral stimuli.

Conclusions: The significantly larger masking effects due to a contralateral distractor sound observed in patients with listening difficulties without apparent peripheral pathology may suggest the involvement of masking mechanisms other than the energetic masking that occurs in the periphery. In addition, the masking effect was more pronounced with real environmental sounds (music with lyrics) than with the continuous steady noise often used as a masker in clinical speech-in-noise testing. In other words, a speech-in-noise test using steady noise may underestimate the degree of listening problems patients experience in daily life, and a speech-in-noise test using maskers such as music and/or speech sounds could make listening problems more apparent in patients with listening difficulties.

Pages: 393-400.
Ear and Hearing. Pub Date: 2025-03-01. Epub Date: 2024-11-07. DOI: 10.1097/AUD.0000000000001598
Megan J Kobel, Andrew R Wagner, Daniel M Merfeld
Associations Between Vestibular Perception and Cognitive Performance in Healthy Adults.

Objectives: A growing body of evidence has linked vestibular function to higher-order cognitive ability in aging individuals. Past evidence has suggested unique links between vestibular function and cognition on the basis of end-organ involvement (i.e., otoliths versus canals). However, past studies have assessed only vestibular reflexes, despite the diversity of vestibular pathways. This exploratory study therefore assessed associations between vestibular perception and cognition in aging adults to determine potential relationships.

Design: Fifty adults (21 to 84 years; mean = 52.9, SD = 19.8) were included in this cross-sectional study. All participants completed a vestibular perceptual threshold test battery designed to target perception predominantly mediated by each end-organ pair and intra-vestibular integration: 1 Hz y-translation (utricle), 1 Hz z-translation (saccule), 2 Hz yaw rotation (horizontal canals), 2 Hz right anterior, left posterior (RALP) and left anterior, right posterior (LARP) tilts (vertical canals), and 0.5 Hz roll tilt (canal-otolith integration). Participants also completed standard assessments of cognition and path integration: the Digit Symbol Substitution Test (DSST), Trail Making Test (TMT), and Gait Disorientation Test (GDT). Associations were assessed using Spearman rank correlations and multivariable regression analyses.

Results: In the correlation analyses, DSST correlated with RALP/LARP tilt, roll tilt, and z-translation; TMT-A correlated only with z-translation; and TMT-B correlated with roll tilt and z-translation after correction for multiple comparisons. GDT correlated with RALP/LARP tilt and y-translation. In age-adjusted regression analyses, DSST and TMT-B were associated with z-translation thresholds, and GDT was associated with y-translation thresholds.

Conclusions: In this cross-sectional study, we identified associations between vestibular perceptual thresholds with otolith contributions and standard measures of cognition. These results are in line with past results suggesting unique associations between otolith function and cognitive performance.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11832344/pdf/
Pages: 461-473.
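The analysis reported above (Spearman rank correlations between thresholds and test scores) can be illustrated with a minimal, self-contained sketch. The threshold and score values below are invented placeholders, not the study's data, and this hand-rolled implementation merely stands in for whatever statistical software the authors used.

```python
def rank(xs):
    """Average ranks (1-based); tied values share the mean of their positions,
    as required for Spearman's rho."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical example: z-translation thresholds vs. DSST scores
# (invented numbers, chosen to be perfectly monotonic, so rho = -1.0).
thresholds = [0.8, 1.1, 1.5, 2.0, 2.6]
dsst_scores = [62, 55, 49, 44, 38]
rho = spearman_rho(thresholds, dsst_scores)  # -> -1.0
```

Because rho depends only on ranks, it captures the monotonic associations the authors tested without assuming linearity between thresholds and scores.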
Ear and Hearing. Pub Date: 2025-03-01. Epub Date: 2024-09-23. DOI: 10.1097/AUD.0000000000001596
Andreea Micula, Emil Holmer, Ruijing Ning, Henrik Danielsson
Relationships Between Hearing Status, Cognitive Abilities, and Reliance on Visual and Contextual Cues.

Objectives: Visual and contextual cues facilitate speech recognition in suboptimal listening conditions (e.g., background noise, hearing loss, hearing aid signal processing). Moreover, successful speech recognition in challenging listening conditions is linked to cognitive abilities such as working memory and fluid intelligence. However, it is unclear which cognitive abilities facilitate the use of visual and contextual cues in individuals with normal hearing and in hearing aid users. The first aim was to investigate whether hearing aid users rely on visual and contextual cues to a higher degree than individuals with normal hearing in a speech-in-noise recognition task. The second aim was to investigate whether working memory and fluid intelligence are associated with the use of visual and contextual cues in these groups.

Design: Groups of participants with normal hearing and hearing aid users with bilateral, symmetrical, mild to severe sensorineural hearing loss were included (n = 169 per group). The Samuelsson and Rönnberg task was administered to measure speech recognition in speech-shaped noise. The task consists of an equal number of sentences administered in the auditory and audiovisual modalities, without and with contextual cues (a visually presented word preceding the sentence, e.g., "Restaurant"). The signal to noise ratio was individually set to 1 dB below the level obtained for 50% correct speech recognition in the hearing-in-noise test administered in the auditory modality. The Reading Span test was used to measure working memory capacity, and the Raven test was used to measure fluid intelligence. The data were analyzed using linear mixed-effects modeling.

Results: Both groups exhibited significantly higher speech recognition performance when visual and contextual cues were available. Although the hearing aid users performed significantly worse than those with normal hearing in the auditory modality, both groups reached similar performance levels in the audiovisual modality. In addition, a significant positive relationship was found between Raven test score and speech recognition performance only for the hearing aid users in the audiovisual modality. There was no significant relationship between Reading Span test score and performance.

Conclusions: Both participants with normal hearing and hearing aid users benefitted from contextual cues, regardless of cognitive abilities. The hearing aid users relied on visual cues to compensate for their perceptual difficulties, reaching a performance level similar to that of the participants with normal hearing when visual cues were available, despite worse performance in the auditory modality. Notably, hearing aid users with higher fluid intelligence were able to capitalize on visual cues more successfully than those with poorer fluid intelligence.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11825487/pdf/
Pages: 433-443.
Ear and Hearing. Pub Date: 2025-03-01. Epub Date: 2024-09-02. DOI: 10.1097/AUD.0000000000001582
Tami Harel-Arbeli, Hagit Shaposhnik, Yuval Palgi, Boaz M Ben-David
Taking the Extra Listening Mile: Processing Spoken Semantic Context Is More Effortful for Older Than Young Adults.

Objectives: Older adults use semantic context to generate predictions in speech processing, compensating for aging-related sensory and cognitive changes. This study aimed to gauge aging-related changes in the effort exerted in using context.

Design: The study revisited data from Harel-Arbeli et al. (2023), which used a "visual-world" eye-tracking paradigm. Data on efficiency of context use (response latency and the probability of gazing at the target before hearing it) and effort exertion (pupil dilation) were extracted from a subset of 14 young adults (21 to 27 years old) and 13 older adults (65 to 79 years old).

Results: Both age groups showed a similar pattern of context benefits for response latency and target word predictions; however, only the older adult group showed overall increased pupil dilation when listening to context sentences.

Conclusions: Older adults' efficient use of spoken semantic context appears to come at the cost of increased effort exertion.

Pages: 315-324.
Ear and Hearing. Pub Date: 2025-03-01. Epub Date: 2024-11-06. DOI: 10.1097/AUD.0000000000001602
Dana Bsharat-Maalouf, Jens Schmidtke, Tamar Degani, Hanin Karawani
Through the Pupils' Lens: Multilingual Effort in First and Second Language Listening.

Objectives: The present study aimed to examine the listening effort exerted by multilinguals in their first (L1) and second (L2) languages in quiet and noisy listening conditions, and to investigate how the presence of a constraining context within sentences influences that effort.

Design: A group of 46 young adult Arabic (L1)-Hebrew (L2) multilinguals participated in a listening task assessing perceptual performance and the effort exerted (measured through pupillometry) while listening to single words and sentences presented in their L1 and L2, in quiet and noisy environments (signal to noise ratio = 0 dB).

Results: Listening in quiet was easier than in noise, as supported by both the perceptual and the pupillometry results. Perceptually, multilinguals performed similarly in both languages in quiet, reaching ceiling levels. Under noisy conditions, however, perceptual accuracy was significantly lower in L2, especially when processing sentences. Critically, pupil dilation was larger and more prolonged when listening to L2 than to L1 stimuli, a difference observed even in the quiet condition. Contextual support resulted in better perceptual performance for high-predictability than for low-predictability sentences, but only in L1 under noisy conditions. In L2, pupillometry showed increased effort for high-predictability compared with low-predictability sentences, but this increased effort did not lead to better understanding. In fact, in noise, speech perception was lower for high-predictability L2 sentences than for low-predictability ones.

Conclusions: The findings underscore the importance of examining listening effort in multilingual speech processing and suggest that increased effort may be present in multilinguals' L2 within clinical and educational settings.

Pages: 494-511.
Ear and Hearing. Pub Date: 2025-03-01. Epub Date: 2024-10-18. DOI: 10.1097/AUD.0000000000001587
Sarah K Grinn, Dana E Notaro, Jatinder K Shokar, Chin-I Cheng
Changes in Auditory Performance Following a Virtual Reality Music Concert.

Objectives: The purpose of this study was to evaluate threshold and suprathreshold auditory risk from a newly popular platform of music concert entertainment: virtual reality (VR) headsets. Recreational noise exposure to music is the primary source of hearing hazard in young adults, with noise doses at in-person concert venues and music festivals well in excess of the daily exposure limit recommended by the National Institute for Occupational Safety and Health. While research on the relationship between personal music players and noise-induced hearing loss risk is abundant, no study has yet evaluated noise-induced hearing loss risk from VR headsets, which are newest to the commercial market at this time.

Design: Thirty-one young adult participants (18 to 25 years) with normal hearing sensitivity (0 to 16 dB HL) experienced a VR music concert and participated in three data collection timepoints: Session A pre-exposure, Session A post-exposure, and Session B post-exposure. Participants underwent baseline audiometry (0.25 to 20 kHz), distortion product otoacoustic emission (DPOAE) testing (1 to 10 kHz), and Words-in-Noise testing. Participants then wore a commercially available VR headset (Meta Quest 2) and experienced a freely available online VR music concert (via the video-sharing website YouTube). The VR music concert lasted 90 min at maximum volume, yielding an average sound level equivalent of 78.7 dBA, a maximum sound level of 88.2 dBA, and an LCpeak sound level of 98.6 dBA. Post-exposure testing was conducted immediately at the conclusion of the VR concert, and again within 24 hr to 1 week after the exposure. Participants also answered a questionnaire estimating noise exposure history (the National Acoustic Laboratories "Noise Calculator").

Results: No post-exposure deficit was observed in DPOAEs or Words-in-Noise scores (p's > 0.05). However, statistically significant temporary post-exposure deficits were observed in audiometry at 4, 8, and 12.5 kHz (p's < 0.05; mean differences: 2 to 3 dB HL). Twenty-four-hour and 1-week post-exposure measurements revealed no permanent changes from baseline (p's > 0.05), aside from one spurious difference at 12.5 kHz. Males tended to exhibit a significantly higher noise history score on average than females. The primary, secondary, and tertiary sources of noise hazard history in this young adult cohort included amplified music.

Conclusions: These preliminary data suggest that VR music concerts, which are likely to produce a substantially lower noise dose than in-person music concerts, may still be capable of producing at least slight, temporary threshold shifts on the order of 2 to 3 dB HL. Future research should include VR headsets in personal music player risk assessment, as the VR music concert platform is rapidly gaining popularity among young adults.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11825494/pdf/
Pages: 382-392.
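The NIOSH comparison invoked above can be made concrete. The criterion itself is standard NIOSH practice (100% daily dose corresponds to 85 dBA for 8 hours, with a 3-dB exchange rate); the function below is a simple sketch of that formula, not the dosimetry method the authors used, and the example plugs in the exposure figures reported in the abstract.

```python
def niosh_dose_percent(laeq_dba, duration_hours,
                       criterion_db=85.0, criterion_hours=8.0,
                       exchange_rate_db=3.0):
    """Daily noise dose (%) under the NIOSH criterion: every 3-dB increase
    above 85 dBA halves the allowed exposure time."""
    allowed_hours = criterion_hours * 2.0 ** ((criterion_db - laeq_dba) / exchange_rate_db)
    return 100.0 * duration_hours / allowed_hours

# The 90-minute VR concert averaged 78.7 dBA (LAeq), per the abstract:
dose = niosh_dose_percent(78.7, 1.5)  # a few percent of the daily allowance
```

By this rule the VR concert consumes only a small fraction of the daily allowance, which is consistent with the authors' point that VR concerts deliver a far lower dose than live venues yet still produced small temporary threshold shifts.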
Ear and Hearing. Pub Date: 2025-03-01. Epub Date: 2024-11-07. DOI: 10.1097/AUD.0000000000001601
Dina Lelic, Erin Picou, Valeriy Shafiro, Christian Lorenzi
Sounds of Nature and Hearing Loss: A Call to Action.

The ability to monitor surrounding natural sounds and scenes is important for performing many activities of daily life and for overall well-being. Yet, unlike speech, the perception of natural sounds and scenes is relatively understudied in relation to hearing loss, despite documented restorative health effects. We present data from first-time hearing aid users describing "rediscovered" natural sounds they could now perceive with clarity. These data suggest that hearing loss not only diminishes recognition of natural sounds but also limits people's awareness of the richness of their environment, and thus their connection to it. Little is presently known about the extent to which hearing aids can restore the perception of the abundance, clarity, or intensity of natural sounds. Our call to action outlines specific steps to improve the experience of natural sounds and scenes for people with hearing loss, an overlooked aspect of their quality of life.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11825482/pdf/
Pages: 298-304.
Ear and Hearing. Pub Date: 2025-03-01. Epub Date: 2024-10-07. DOI: 10.1097/AUD.0000000000001595
Pamela P Lunardelo, Marisa T H Fukuda, Sthella Zanchetta
Age-Related Listening Performance Changes Across Adulthood.

Objectives: This study compares auditory processing performance across different decades of adulthood, including young adults and middle-aged individuals with normal hearing and no spontaneous auditory complaints.

Design: We assessed 80 participants with normal hearing, at least 10 years of education, and normal global cognition. The participants completed various auditory tests, including speech-in-noise, dichotic digits, duration, pitch pattern sequence, gap in noise, and masking level difference. In addition, we conducted working memory assessments and administered a questionnaire on self-perceived hearing difficulties.

Results: Our findings revealed significant differences in auditory test performance across age groups, except for the masking level difference. The youngest group outperformed all other age groups in the speech-in-noise test, while differences in dichotic listening and temporal resolution emerged from age 40 and in temporal ordering from age 50. Moreover, higher education levels and better working memory test scores were associated with better auditory performance as individuals aged, although the influence of these factors varied across the auditory tests. Interestingly, self-reported hearing difficulties increased with age, even in participants without spontaneous auditory complaints.

Conclusions: Our study highlights significant variations in auditory test performance, with noticeable changes occurring from age 30 and becoming more pronounced from age 40 onward. As individuals grow older, they tend to perceive more hearing difficulties. Furthermore, the impact of age on auditory processing performance is influenced by factors such as education and working memory.

Pages: 408-420.
Ear and Hearing. Pub Date: 2025-03-01. Epub Date: 2024-11-06. DOI: 10.1097/AUD.0000000000001605
Varsha Rallapalli, Richard Freyman, Pamela Souza
Relationship Between Working Memory, Compression, and Beamformers in Ideal Conditions.

Objectives: Previous research has shown that speech recognition with different wide dynamic range compression (WDRC) time-constants (fast-acting, "Fast," or slow-acting, "Slow") is associated with individual working memory ability, especially in adverse listening conditions. Until recently, much of this research was limited to omnidirectional hearing aid settings and colocated speech and noise, whereas most hearing aids are fit with directional processing that may improve the listening environment in spatially separated conditions and interact with WDRC processing. The primary objective of this study was to determine whether there is an association between individual working memory ability and speech recognition in noise with different WDRC time-constants, with and without microphone directionality (binaural beamformer, "Beam," versus omnidirectional, "Omni") in a spatial condition ideal for the beamformer (speech at 0°, noise at 180°). The hypothesis was that the relationship between speech recognition ability and WDRC time-constants would depend on working memory in the Omni mode, whereas the relationship would diminish in the Beam mode. The study also examined whether this relationship differs from the effects of working memory on speech recognition with WDRC time-constants previously studied in colocated conditions.

Design: Twenty-one listeners with bilateral mild to moderately severe sensorineural hearing loss repeated low-context sentences mixed with four-talker babble, presented across 0 to 10 dB signal to noise ratio (SNR) in colocated (0°) and spatially separated (180°) conditions. A wearable hearing aid customized to the listener's hearing level was used to present four signal processing combinations of microphone mode (Beam or Omni) and WDRC time-constants (Fast or Slow). Individual working memory ability was measured using the reading span test. A signal distortion metric was used to quantify cumulative temporal envelope distortion from background noise and hearing aid processing for each listener. In a secondary analysis, the role of working memory in the relationship between cumulative signal distortion and speech recognition was examined in the spatially separated condition.

Results: Signal distortion was greater with Fast than with Slow WDRC, regardless of microphone mode or spatial condition. As expected, Beam reduced signal distortion and improved speech recognition over Omni, especially at poorer SNRs. Contrary to the hypothesis, speech recognition with different WDRC time-constants did not depend on working memory in either Beam or Omni in the spatially separated condition. However, there was a significant interaction between working memory and cumulative signal distortion, such that speech recognition increased at a faster rate with lower distortion for individuals with better working memory.

Pages: 523-536.
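The Fast versus Slow WDRC contrast in this study comes down to how quickly the compressor's level estimator, and hence its gain, tracks the fluctuating input. The sketch below is a deliberately simplified illustration of that idea (a static compression gain rule plus a one-pole level tracker); the kneepoint, ratio, and gain values are invented, and this is not the signal processing used in the study's wearable hearing aid.

```python
def wdrc_gain_db(level_db, knee_db=45.0, ratio=2.0, max_gain_db=20.0):
    """Static WDRC rule: full gain below the kneepoint, gain reduced by
    (1 - 1/ratio) dB per dB above it. All parameter values are invented."""
    over = max(0.0, level_db - knee_db)
    return max_gain_db - over * (1.0 - 1.0 / ratio)

def track_level(levels_db, tau_frames):
    """One-pole level estimator: a small time-constant reacts quickly
    (fast-acting WDRC), a large one reacts slowly (slow-acting WDRC)."""
    alpha = 1.0 / tau_frames
    est, out = levels_db[0], []
    for x in levels_db:
        est += alpha * (x - est)
        out.append(est)
    return out

# A sudden 40 -> 70 dB onset: the fast tracker follows the jump almost
# immediately (so gain is cut promptly), while the slow tracker lags,
# briefly leaving the onset over-amplified but preserving the envelope.
onset = [40.0] + [70.0] * 20
fast_est = track_level(onset, 2)[-1]   # near 70 dB by the end of the onset
slow_est = track_level(onset, 20)[-1]  # still well below 70 dB
```

This lag is the source of the temporal envelope distortion the study's signal distortion metric quantifies: Fast compression tracks (and therefore flattens) the envelope more aggressively than Slow compression.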