Ear and Hearing, Pub Date: 2024-12-26, DOI: 10.1097/AUD.0000000000001621
Authors: Chelsea M Blankenship, Lindsey M Hickson, Tera Quigley, Erik Larsen, Li Lin, Lisa L Hunter

Title: Extended High-Frequency Audiometry Using the Wireless Automated Hearing Test System Compared to Manual Audiometry in Children and Adolescents.

Objectives: Valid wireless automated Békésy-like audiometry (ABA) outside a sound booth that includes extended high frequencies (EHF) would increase access to monitoring programs for individuals at risk for hearing loss, particularly those at risk for ototoxicity. The purpose of the study was to compare thresholds obtained with (1) manual audiometry using an Interacoustics Equinox and a modified Hughson-Westlake 5 dB threshold technique versus automated audiometry using the Wireless Automated Hearing Test System (WAHTS) and a Békésy-like 2 dB threshold technique inside a sound booth, and (2) ABA measured inside the sound booth versus ABA measured outside the sound booth.

Design: Cross-sectional study including 28 typically developing children and adolescents (mean = 14.5 years; range = 10 to 18 years). Audiometric thresholds were measured from 0.25 to 16 kHz with manual audiometry inside the sound booth and with ABA measured both inside and outside the sound booth in counterbalanced order.

Results: ABA thresholds measured inside the sound booth were overall about 5 dB better than manual thresholds in the conventional frequencies (0.25 to 8 kHz). In the EHFs (10 to 16 kHz), a larger difference was observed: ABA thresholds were overall about 14 dB better than manual thresholds. The majority of ABA thresholds measured outside the sound booth were within ±10 dB of ABA thresholds measured inside the sound booth (conventional: 86%; EHF: 80%). However, only 69% of ABA thresholds measured inside the sound booth were within ±10 dB of manual thresholds in the conventional frequencies, and only 32% were within ±10 dB of manual thresholds in the EHFs.

Conclusions: These results indicate that WAHTS ABA yields better thresholds in the conventional frequencies than manual audiometry in children and adolescents, consistent with previous studies in adults. EHF thresholds were also better when measured with WAHTS ABA than with manual audiometry, likely due to different transducer-related calibration values that are not age-adjusted. Additional studies of WAHTS automated Békésy-like EHF thresholds in healthy pediatric participants are needed to establish age-appropriate normative thresholds for clinical application in monitoring programs for noise-induced hearing loss and/or ototoxicity.

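The within-±10 dB agreement percentages reported above reduce to a simple element-wise comparison of paired thresholds. A minimal sketch in Python, using made-up threshold values rather than study data:

```python
def pct_within(a, b, tol=10):
    """Percent of paired thresholds whose difference is within +/-tol dB."""
    agree = sum(1 for x, y in zip(a, b) if abs(x - y) <= tol)
    return 100.0 * agree / len(a)

# Hypothetical thresholds (dB HL) at six test frequencies for one ear --
# illustrative values only, not data from the study.
aba_booth = [5, 10, 0, 15, 20, 25]    # ABA inside the sound booth
aba_field = [10, 5, 10, 30, 15, 20]   # ABA outside the sound booth

print(pct_within(aba_booth, aba_field))  # 5 of 6 pairs agree within +/-10 dB
```

The study's 86%/80%/69%/32% figures are this statistic computed over all participants and frequencies in each comparison.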
Ear and Hearing, Pub Date: 2024-12-23, DOI: 10.1097/AUD.0000000000001617
Authors: Genoveva Hurtado, Elizabeth A Poth, Neil P Monaghan, Shaun A Nguyen, Habib G Rizk

Title: Isolated Corrective Saccades in the Bilateral Posterior Canal Stimulation During the Video Head Impulse Test: A Marker of Central Vestibulopathy?

Objectives: This study aimed to determine whether the presence of corrective saccades during video head impulse test (vHIT) stimulation of the bilateral posterior semicircular canals (PSCs) correlated with other vestibular test results, demographics, symptoms, or diagnoses.

Design: Retrospective chart review in which 1006 subjects' vHIT records were screened, with 17 subjects meeting the inclusion criteria for isolated bilateral PSC saccades.

Results: Of the 1006 patients undergoing vHIT testing, only 1.7% had isolated bilateral PSC saccades. The median age of subjects was 73 years (range, 61 to 85 years). Statistically significant differences were identified between groups with abnormal PSC vHIT gain and abnormal ocular vestibular evoked myogenic potential results, as well as those with 1 to 2 diagnoses.

Conclusions: Our study confirms the rarity of isolated bilateral PSC vHIT saccades, as well as their association with central vestibulopathy. Correlations with other vestibular test results, demographics, symptoms, or diagnoses may be strengthened by future large-scale studies. Further understanding of the clinical utility of isolated bilateral PSC vHIT saccades is needed. Patients with bilateral PSC vHIT abnormalities may benefit from a comprehensive neurological evaluation and consultation.

Ear and Hearing, Pub Date: 2024-12-19, DOI: 10.1097/AUD.0000000000001619
Authors: Christopher Slugocki, Francis Kuk, Petri Korhonen

Title: Using the Mismatch Negativity to Evaluate Hearing Aid Directional Enhancement Based on Multistream Architecture.

Objectives: To evaluate whether hearing aid directivity based on multistream architecture (MSA) might enhance the mismatch negativity (MMN) evoked by phonemic contrasts in noise.

Design: Single-blind within-subjects design. Fifteen older adults (mean age = 72.7 years; range = 40 to 88 years; 8 females) with a moderate-to-severe degree of sensorineural hearing loss participated. Participants first performed an adaptive two-alternative forced-choice phonemic discrimination task to determine the speech level, that is, the signal to noise ratio (SNR), required to reliably discriminate between two monosyllabic stimuli (/ba/ and /da/) presented in ongoing fixed-level background noise. Participants were then presented with a phonemic oddball sequence alternating on each trial between two loudspeakers located in the front at 0° and -30° azimuth. This sequence presented the same monosyllabic stimuli in the same background noise at individualized SNRs determined by the phonemic discrimination task. The MMN was measured as participants passively listened to the oddball sequence in two hearing aid conditions: MSA-ON and MSA-OFF.

Results: The magnitude of the MMN component was significantly enhanced when evoked in the MSA-ON relative to the MSA-OFF condition. Unexpectedly, MMN magnitudes were also positively related to degree of hearing loss. Neither MSA nor the participant's hearing loss independently affected MMN latency. However, MMN latency was significantly affected by the interaction of hearing aid condition and individualized SNR: a negative relationship between individualized SNR and MMN latency was observed only in the MSA-OFF condition.

Conclusions: Hearing aid directivity based on the MSA approach improved preattentive detection of phonemic contrasts in a simulated multi-talker situation, as indexed by larger MMN component magnitudes. The MMN may be generally useful for exploring the underlying nature of speech-in-noise benefits conferred by some hearing aid features.

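The abstract does not specify the adaptive procedure's parameters. A generic two-down/one-up staircase, a common choice for two-alternative forced-choice tasks because it converges near the 70.7%-correct point, can be sketched as follows; the starting SNR, step size, and trial count are hypothetical, not the study's values:

```python
def two_down_one_up(trial_correct, start_snr=10.0, step=2.0, n_trials=40):
    """Generic 2-down/1-up adaptive staircase (targets ~70.7% correct).

    trial_correct(snr) runs one forced-choice trial and returns True/False.
    All parameter values here are illustrative, not the study's actual ones.
    """
    snr = start_snr
    consecutive_correct = 0
    track = []
    for _ in range(n_trials):
        track.append(snr)
        if trial_correct(snr):
            consecutive_correct += 1
            if consecutive_correct == 2:  # two correct in a row -> harder
                snr -= step
                consecutive_correct = 0
        else:                             # any error -> easier
            snr += step
            consecutive_correct = 0
    return track

# Toy deterministic listener whose discrimination threshold sits at 0 dB SNR:
track = two_down_one_up(lambda snr: snr > 0)
# The track descends from 10 dB and then oscillates around 0 dB SNR.
```

In practice the individualized SNR would be taken from the reversal points of such a track, then held fixed during the oddball sequence.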
Ear and Hearing, Pub Date: 2024-12-19, DOI: 10.1097/AUD.0000000000001612
Authors: Erin M Picou, Hilary Davis, Kathleen Healy Lunsford, Anne Marie Tharpe

Title: Validation of the Vanderbilt Classroom Listening Assessment Short Survey for Children With Unilateral Hearing Loss.

Objectives: Children with unilateral hearing loss experience difficulties in classroom listening situations. Few validated questionnaires are available for monitoring listening development and quantifying the challenges school-aged children with unilateral hearing loss experience. The purpose of this study was to evaluate a survey that describes the classroom listening challenges reported by children with unilateral hearing loss with and without the use of personal hearing devices (air conduction hearing aid, bone conduction hearing aid, cochlear implant, contralateral routing of signals system).

Design: Children aged 9 to 17 years with self-reported unilateral hearing loss completed an online survey about classroom listening difficulties when not using a personal hearing device (n = 1148) or when using one (n = 897). The survey includes 15 questions examining different situations common in modern classrooms, each accompanied by a picture depicting the described listening situation. Exploratory factor analysis was used to develop subscales, and the internal reliability of the subscales was evaluated. To validate the survey, the relationships between survey scores and self-reported hearing difficulties (without a personal hearing device) or type of device (with a personal hearing device) were evaluated using regression analyses.

Results: Factor analysis revealed that survey scores for individual items loaded onto three factors. On the basis of these factors, subscales were created relating to: (1) listening situations where the talker is far away from the child, (2) listening situations where the talker is close to the child and they are inside a building, and (3) listening situations where the talker is close to the child and they are outside a building. Regression analyses revealed that children reported the greatest difficulty in school settings when the sound of interest was far away from them. Although scores were generally higher (indicating easier listening) when children were wearing their personal hearing devices (i.e., air conduction hearing aid, bone conduction hearing aid, contralateral routing of signals system, cochlear implant), situations with faraway signals were still reported as more challenging than situations where signals were close.

Conclusions: This set of findings highlights the need to incorporate distance effects into laboratory evaluations that include children with unilateral hearing loss. In addition, the findings support clinical interventions that address talker-to-listener distances, such as preferential seating and remote microphone systems. Last, the results of this study validate the Vanderbilt Classroom Listening Assessment Short Survey for use with children aged 9 to 17 years with unilateral self-reported hearing difficulty. The subscales are em…

Ear and Hearing, Pub Date: 2024-12-18, DOI: 10.1097/AUD.0000000000001616
Authors: Ruben Hermann, Stefano Ramat, Silvia Colnaghi, Vincent Lagadec, Clément Desoche, Denis Pelisson, Caroline Froment Tilikete

Title: Catch-Up Saccades in Vestibulo-Ocular Reflex Deficit: Contribution of Visual Information?

Objectives: Catch-up saccades help compensate for the loss of gaze stabilization during rapid head rotation in cases of vestibular deficit. While overt saccades observed after head rotation are obviously visually guided, some catch-up saccades occur with shorter latency, while the head is still moving, anticipating the needed final eye position. These covert saccades seem to be generated from the integration of multisensory inputs. Vision could be one of these inputs, but the known delay for triggering visually guided saccades calls this possibility into question. The main objective of this study was to evaluate the potential role of visual information in controlling (triggering and guiding) the first catch-up saccades in patients suffering from bilateral vestibulopathy. To investigate this, we used the head impulse test in a virtual reality setting that allowed us to create different visuo-vestibular mismatch conditions.

Design: Twelve patients with bilateral vestibulopathy were recruited. We first assessed the validity of our virtual reality head impulse testing approach in this patient group by comparing recorded eye and head movements with the classical video head impulse test. Then, using the virtual reality system, we tested the head impulse test under the normal condition and three visuo-vestibular mismatch conditions. In the mismatch conditions, the movement of the visual scene relative to the head movement was altered: decreased in amplitude by 50% (half), nullified (freeze), or inverted in direction (inverse). Recorded eye and head movements during these conditions were then analyzed, focusing on the characteristics of the first catch-up saccade.

Results: Impaired vestibulo-ocular reflex required subjects to systematically perform catch-up saccades, which could be covert or overt. The latency of the first catch-up saccade increased with the amount of visuo-vestibular mismatch across the four conditions (i.e., from normal to half to freeze to inverse) and, consequently, the mean percentage of covert saccades decreased with increasing visual feedback error. However, the freeze and inverse conditions revealed the existence of many saccades performed in the wrong direction relative to visual feedback. These visually discordant saccades were present in over half of the trials; they were mainly covert, and their percentage was inversely correlated with residual vestibulo-ocular reflex gain.

Conclusions: Visual information significantly impacts catch-up saccade latency and the relative number of covert saccades during head impulse testing in vestibular deficit. However, in more than 50% of trials involving a visuo-vestibular mismatch, catch-up saccades remained directed in the compensatory direction relative to head movement, that is, they were visually discordant. Therefore, contrary to previously published proposals, visual information does not appear to b…

Ear and Hearing, Pub Date: 2024-12-16, DOI: 10.1097/AUD.0000000000001610
Authors: Jeena Mary Joy, Lakshmi Venkatesh, Samuel N Mathew, Swapna Narayanan, Sita Sreekumar

Title: Speech Perception and Language Abilities Among Children Using Cochlear Implants: Findings From a Primary School Age Cohort in South India.

Objectives: This study aimed to profile the speech perception and language abilities of a cohort of pediatric cochlear implant (CI) users in the primary school years. It also aimed to examine the intercorrelations among audiological, child, and environmental characteristics, speech perception, and language skills, and to explore the predictors of speech perception and language skills.

Design: A cross-sectional design was used. Participants were 222 pediatric CI users (106 boys; 116 girls) with a mean chronological age of 10.51 (SD ± 1.28) years. Participants had received CIs at a mean age of 2.93 (SD ± 0.95) years, with a mean duration of CI use of 7.43 (SD ± 1.15) years at the time of assessment. Participants completed an assessment battery comprising speech perception (phoneme discrimination, open-set speech perception in quiet) and language (semantics, syntax) tasks. Selected audiological, child, and environmental characteristics were documented. The means and SDs of the measures across age categories (8 to 12 years), and the proportions of children scoring better than 80%, between 50 and 80%, and poorer than 50% of the total possible score on each task, were computed to generate a profile of speech perception and language abilities. Correlational and regression analyses assessed the intercorrelations among the variables and the predictors of speech perception and language abilities.

Results: A large proportion (79.0%) of children in the study group scored better than 80% for phoneme discrimination, whereas only 17.8% scored better than 80% for open-set speech perception in quiet. Additionally, 42.8% and 20.8% of children scored better than 80% for semantics and syntax, respectively. Speech perception and language abilities demonstrated moderate-to-strong intercorrelations, contributing a significant proportion of the total variance explained in phoneme discrimination (42.9%), open-set speech perception (61.8%), semantics (63.0%), and syntax (60.8%). Phoneme discrimination and open-set speech perception emerged as large contributors to variance in overall language abilities. Among the audiological factors, only hearing age contributed a small proportion of variance (3 to 6%) across children's speech perception and language performance.

Conclusions: Children using CIs demonstrated highly variable performance in speech perception and expressive language skills during primary school. Although children demonstrated improved performance in phoneme discrimination and semantics, they continued to face challenges in open-set speech perception in quiet and in syntax. The effect of audiological, child, and environmental factors was minimal in explaining the variance in speech perception and language abilities, which shared a bidirectional relationship. The findings relating to mid-term outcomes, ranging from 4 to 9 years after cochle…

Ear and Hearing, Pub Date: 2024-12-11, DOI: 10.1097/AUD.0000000000001607
Authors: Cailey A Salagovic, Ryan A Stevenson, Blake E Butler

Title: Behavioral Response Modeling to Resolve Listener- and Stimulus-Related Influences on Audiovisual Speech Integration in Cochlear Implant Users.

Objectives: Speech intelligibility is supported by the sound of a talker's voice and by visual cues related to articulatory movements. The relative contribution of auditory and visual cues to an integrated audiovisual percept varies depending on a listener's environment and sensory acuity. Cochlear implant users rely more on visual cues than listeners with acoustic hearing to help compensate for the fact that the auditory signal produced by their implant is poorly resolved relative to that of the typically developed cochlea. The relative weight placed on auditory and visual speech cues can be measured by presenting discordant cues across the two modalities and assessing the resulting percept (the McGurk effect). The current literature is mixed with regard to how cochlear implant users respond to McGurk stimuli; some studies suggest they report hearing syllables that represent a fusion of the auditory and visual cues more frequently than typical-hearing controls, while others report less frequent fusion. However, several of these studies compared implant users with younger control samples despite evidence that the likelihood and strength of audiovisual integration increase with age. Thus, the present study sought to clarify the impacts of hearing status and age on multisensory speech integration using a combination of behavioral analyses and response modeling.

Design: Cochlear implant users (mean age = 58.9 years), age-matched controls (mean age = 61.5 years), and younger controls (mean age = 25.9 years) completed an online audiovisual speech task. Participants were shown and/or heard four different talkers producing syllables in auditory-alone, visual-alone, and incongruent audiovisual conditions. After each trial, participants reported the syllable they heard or saw from a list of four possible options.

Results: The younger and older control groups performed similarly in both unisensory conditions. The cochlear implant users performed significantly better than either control group in the visual-alone condition. When responding to the incongruent audiovisual trials, cochlear implant users and age-matched controls experienced significantly more fusion than younger controls. When fusion was not experienced, younger controls were more likely to report the auditorily presented syllable than either implant users or age-matched controls. Conversely, implant users were more likely to report the visually presented syllable than either age-matched or younger controls. Modeling of the relationship between stimuli and behavioral responses revealed that younger controls had lower disparity thresholds (i.e., were less likely to experience a fused audiovisual percept) than either the implant users or older controls, while implant users had higher levels of sensory noise (i.e., more variability in the way a given stimulus pair is perceived across multiple presentations) than age-matched controls.

Ear and Hearing, Pub Date: 2024-12-10, DOI: 10.1097/AUD.0000000000001606
Authors: Laura E Hahn, Anke Hirschfelder, Dirk Mürbe, Claudia Männel

Title: How Do Enriched Speech Acoustics Support Language Acquisition in Children With Hearing Loss? A Narrative Review.

Abstract: Language outcomes of children with hearing loss remain heterogeneous despite recent advances in treatment and intervention. Consonants with high frequency, in particular, continue to pose challenges to affected children's speech perception and production. In this review, the authors evaluate findings on how enriched child-directed speech and song might function as a form of early family-centered intervention to remedy the effects of hearing loss on consonant acquisition already during infancy. First, they review the developmental trajectory of consonant acquisition and how it is impeded by permanent pediatric hearing loss. Second, they assess how phonetic-prosodic and lexico-structural features of caregiver speech and song could facilitate acquisition of consonants in the high-frequency range. Last, recommendations for clinical routines and further research are offered.

Ear and Hearing, Pub Date: 2024-12-09, DOI: 10.1097/AUD.0000000000001615
Authors: Alexis Whittom, Loonan Chauvette, Alex Bégin, Isabelle Blanchette, Pascale Tremblay, Andréanne Sharp

Title: Music Perception in Older Adults With Hearing Loss: Protective Effect of Musical Experience.

Objectives: The goal of this project was to investigate the impact of musical experience, hearing loss, and age on music perception in older adults. The authors hypothesized that older adults with a varying degree of musical experience would perform better at music perception tasks than their counterparts without musical experience, while controlling for age and hearing loss.

Design: This study used a descriptive correlational cross-sectional design. Seventy-seven older adults aged 60 to 90 years were recruited and divided into two groups based on their lifetime musical experience: the group without musical experience (n = 39) and the M group (with musical experience; n = 38). Participants in the M group had played an instrument for 5 years or more and/or taken at least 1 year of music lessons. Following a hearing screening and a musical experience questionnaire, participants completed two music perception tasks: (1) a short version of the Montreal Battery of Evaluation of Amusia (MBEA) measuring melodic (scale and contour) and rhythm perception, and (2) an instrument discrimination task measuring timbre perception.

Results: Participants in the M group had significantly higher accuracy in both tasks than the group without musical experience, while controlling for age and hearing loss. Moreover, a significant interaction was found between group and hearing loss for the MBEA, suggesting that musical experience moderates the impact of hearing loss on melodic and rhythm perception abilities. Finally, the amount of musical experience was the most important positive predictor of MBEA accuracy in the M group.

Conclusions: These results suggest that despite age-related hearing loss, older adults with musical experience still benefit from an experience-driven enhancement in melodic, rhythm, and timbre perception. Findings from this study support the notion that music training is beneficial for music perception abilities, providing protection against the impact of presbycusis.

Ear and Hearing, Pub Date: 2024-12-02, DOI: 10.1097/AUD.0000000000001608
Authors: Grace Szatkowski, Pamela Elizabeth Souza

Title: Evaluation of Communication Outcomes With Over-the-Counter Hearing Aids.

Objectives: Over-the-counter (OTC) hearing aids are a treatment option for adults with mild-to-moderate hearing loss. Previous investigations demonstrated the benefits of OTC hearing aids, primarily self-fit OTCs (i.e., self-adjustable with a smartphone or tablet), on self-reported hearing aid benefit and on speech recognition using standardized measures. However, less is known about whether OTC hearing aids effectively improve aspects of everyday communication, particularly preprogrammed OTCs (i.e., OTCs with manufacturer-defined programs). The goal of this study was to evaluate the benefits of preprogrammed OTC hearing aids for two important aspects of communication: (1) conversation efficiency, or the time taken during conversations with a familiar communication partner (e.g., one's spouse), and (2) auditory recall following speech recognition, a critical aspect of participation in conversations.

Design: This study used a within-subject design with thirty adults with mild-to-moderate hearing loss and their familiar communication partners. Participants were fitted with preprogrammed OTC hearing aids using the default program with the best match to target for each listener. The primary outcome measures were conversation efficiency and auditory recall; speech recognition-in-noise served as a secondary measure. Conversation efficiency was evaluated using the DiapixUK task, a "spot-the-difference" conversation task in quiet, and was measured as the total time taken to correctly identify differences between two similar pictures. Within-subject comparisons were made across hearing aid conditions (without and with OTC hearing aids in the default setting). Auditory recall was assessed with the Repeat and Recall Test following speech recognition-in-noise, with low- and high-context sentences presented at 5- and 10-dB signal to noise ratios. In addition to the conditions mentioned, a further comparison was made with the OTC hearing aid's noise-reduction program. Linear mixed-effects models were used to evaluate the effect of OTC hearing aid use on the primary measures of efficiency and recall, and the Friedman signed-rank test was used to evaluate speech recognition scores.

Results: We did not find a significant improvement in conversation efficiency with OTC hearing aid use compared with the unaided condition. For auditory recall, we observed the poorest median recall scores with the default program and the best median scores with the noise-reduction program, although neither observation was statistically significant. Sentence recognition scores were near ceiling in the unaided condition and were poorest with the OTC hearing aids in the default program across most signal to noise ratio and context conditions. Our findings did not show improvements in communication outcomes with OTC hearing aid use. Small to medium effect sizes for our data may be indicative of the limita…

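The study analyzed its within-subject comparisons with linear mixed-effects models, which are not reproduced here. As a simpler illustration of the same within-subject logic, a sign-flip permutation test on paired differences can be sketched as follows; the listener data are invented for the example, not study results:

```python
import random

def paired_permutation_test(a, b, n_perm=5000, seed=1):
    """Two-sided sign-flip permutation test on paired differences.

    Illustrative only: the study itself used linear mixed-effects models;
    this simpler paired test just demonstrates a within-subject comparison.
    """
    diffs = [x - y for x, y in zip(a, b)]
    observed = abs(sum(diffs) / len(diffs))
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        # Under the null, each pair's sign is exchangeable: flip at random.
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    return hits / n_perm

# Toy data: DiapixUK-style task times (s) for 8 listener pairs,
# unaided vs. aided -- invented numbers, not the study's data.
unaided = [120, 135, 150, 110, 160, 140, 155, 130]
aided = [118, 133, 149, 112, 158, 141, 152, 129]
p = paired_permutation_test(unaided, aided)  # p-value for the paired difference
```

A mixed-effects model generalizes this idea by modeling per-subject random effects alongside fixed effects such as hearing aid condition and SNR.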