Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-01-24. DOI: 10.1097/AUD.0000000000001630
Matthew G Wisniewski, C Shane Chuwonganant
"Wearing Hearing Protection Makes Me Worse at My Job: Impacts of Hearing Protection Use on Sensorimotor Tracking Performance." Pages 871-879.

Objectives: Occupational hearing loss is a significant problem worldwide despite the fact that it can be mitigated by wearing hearing protection devices (HPDs). When surveyed, workers frequently report worsened work performance while wearing HPDs as one reason they choose not to wear them. However, few studies have supplemented these subjective reports with objective measures, and where such studies exist, the assessed performance measures have mostly characterized auditory situational awareness in gross terms (e.g., average speech comprehension scores over an entire session). The temporal dynamics of performance and HPD impacts on nonauditory aspects of work performance are largely unknown. In the present study, we aimed to fill this gap by measuring how HPD usage impacted sensorimotor tracking performance in relation to ongoing auditory events.

Design: In two experiments, listeners heard commands sourced from the coordinate response measure (CRM) corpus (i.e., sentences of the form "Ready <call sign> go to <color> <number> now"). These commands informed listeners which of nine moving on-screen objects to track with a computer mouse (e.g., "blue four" refers the listener to a blue square). The commands were presented in background street noise and were heard under either "No HPD" or "HPD" conditions. In experiment 1, HPD wearing was simulated with a digital filter designed to mimic the attenuation profile of a passive HPD. In experiment 2, actual HPDs were worn by listeners. Continuous recording of tracking error allowed us to simultaneously examine how HPD wearing impacted speech comprehension and tracking accuracy, and how tracking accuracy varied as a function of time on task and ongoing auditory events (e.g., the presentation of a critical CRM sentence).

Results: In both experiments, listeners spent less time tracking the correct object in the HPD condition. After trimming the data to time points at which the target object was known, performance was worse in the HPD condition than in the No HPD condition. Examination of the temporal dynamics of tracking error showed that differences arose most strongly during the presentation of CRM sentences.

Conclusions: Workers' complaints of poorer performance while wearing HPDs are justified and extend beyond diminished auditory situational awareness. The negative impact on nonauditory aspects of work performance may be strongest around critical listening periods. Addressing these aspects of performance will be an important part of addressing HPD nonuse in occupational settings.
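The simulated-HPD condition in experiment 1 filtered the stimuli through a digital filter matching a passive protector's attenuation profile. A minimal sketch of that general idea, using a purely illustrative attenuation curve and a frequency-sampling FIR design (the study's actual filter coefficients and attenuation values are not given here):

```python
import numpy as np

fs = 16_000          # sample rate in Hz; illustrative
n_fft = 512          # FIR length

# Hypothetical passive-HPD attenuation profile: dB of attenuation per frequency.
profile_hz = np.array([0, 125, 250, 500, 1000, 2000, 4000, 8000])
profile_db = np.array([18, 18, 20, 24, 28, 32, 38, 40])

# Desired magnitude response: convert attenuation (dB) to linear gain,
# interpolated onto the FFT bin frequencies.
freqs = np.fft.rfftfreq(n_fft, d=1 / fs)
gain = 10.0 ** (-np.interp(freqs, profile_hz, profile_db) / 20.0)

# Frequency-sampling FIR design: inverse FFT of the target magnitude,
# rotated to linear phase and windowed to reduce ripple.
h = np.fft.irfft(gain)
h = np.roll(h, n_fft // 2) * np.hanning(n_fft)

# Apply to 1 s of noise standing in for the speech-in-street-noise stimuli.
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)
y = np.convolve(x, h, mode="same")

rms = lambda s: float(np.sqrt(np.mean(s ** 2)))
print(rms(y) / rms(x))   # overall level drops, dominated by the profile's attenuation
```

Real earmuff/earplug attenuation is typically specified at octave frequencies (as in the profile above), which is why interpolation onto the FFT grid is needed.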
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-02-11. DOI: 10.1097/AUD.0000000000001642
Arthur Boothroyd, Dhiman Sengupta, Shaelyn Painter, Elena Shur, Harinath Garudadri, Carol Mackersie
"Self-Fitting Hearing Aids: Effects of Starting Response and Field Experience." Pages 1019-1028.

Objectives: To determine the effects of changing from a prescribed to a generic starting response on self-fitting outcome and behavior, before and after a brief field experience.

Method: Twenty adult hearing-aid users with mild-to-moderate hearing loss used a smartphone interface to adjust the level and spectral tilt of the output of a wearable master hearing aid while listening to prerecorded speech presented at 65 dB SPL in quiet. A prescribed starting response was based on the participant's own audiogram. A generic starting response was based on an audiogram for a typical mild-to-moderate hearing loss and was the same for all participants. Initial self-fittings from the two starting responses took place in the lab. After a brief field experience involving conversation, self-hearing, and ambient noise, with readjustment as needed, self-fittings from the two starting responses were repeated in the lab. Starting responses, self-fitted responses, and adjustment steps were logged in the master hearing aid for subsequent evaluation of real-ear output spectra and assessment of self-fitting behavior.

Results: Neither starting response nor field experience had a significant effect on mean self-fitted output in the lab (p = 0.506 and p = 0.149, respectively). However, the SD of individual starting-response effects on high-frequency self-fitted output fell by around 50% after the field experience (p = 0.006). The effect of starting response on self-fitting behavior was limited to the number of adjustment steps, which was higher for the generic start (p = 0.014). The effect of field experience on self-fitting behavior was limited to a 50% reduction in self-fitting time (p < 0.001), attributable mainly to less time spent listening after each adjustment step (p = 0.019).

Conclusions: The findings support the conclusion that, for a population with mild-to-moderate hearing loss, a generic starting response can be a viable option for over-the-counter self-fitting hearing aids. They also highlight the need for practice and experience with novel self-fitting hearing aids, and the fact that self-fitting may not be suitable for everyone.
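The two self-adjusted parameters were overall level and spectral tilt. A sketch of how such a two-parameter control might map onto per-band gains; the band centers and the 1 kHz tilt pivot are assumptions for illustration, not the study's implementation:

```python
import numpy as np

# Audiometric-style band centers (Hz); the tilt pivot frequency is assumed.
bands_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
PIVOT_HZ = 1000.0

def band_gains(level_db: float, tilt_db_per_octave: float) -> np.ndarray:
    """Per-band gain (dB): a flat level shift plus a tilt around the pivot."""
    octaves = np.log2(bands_hz / PIVOT_HZ)
    return level_db + tilt_db_per_octave * octaves

# Example: +6 dB overall with a +3 dB/octave high-frequency emphasis.
print(band_gains(6.0, 3.0))  # gains rise from 0 dB at 250 Hz to 15 dB at 8 kHz
```

Collapsing a multi-band fitting into two controls like this is what makes the adjustment space small enough for naive users to search, which is the premise of the self-fitting interface described above.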
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-03-05. DOI: 10.1097/AUD.0000000000001649
Louise Van Goylen, Katrien Kestens, Hannah Keppler
"The Auditory-Cognitive Assessment of Speech Understanding: A Comprehensive Analysis of Construct Validity." Pages 1044-1055.

Objectives: Age-related hearing loss, the predominant global cause of hearing loss in middle-aged and older adults, presents a significant health and social problem, particularly affecting speech understanding. Beyond the auditory system, cognitive functions play a crucial role in speech understanding, especially in noisy environments. Although visual cognitive testing is commonly used as an alternative intended to mitigate the potential adverse effects of hearing loss on the perception of auditory test items, its efficacy in a hearing-related context is questionable due to construct differences. This study therefore investigated the construct validity of auditory and visual versions of cognitive tests in predicting speech understanding, to identify the auditory or visual cognitive predictor(s) best suited for implementation in the field of audiology.

Design: Fifty-two middle-aged and older adults with normal hearing and 52 with hearing loss were included (mean age for the total group: 67.38 years [SD: 7.71 years], range: 45 to 80 years). The subgroups were matched on age, sex, and educational level. Speech understanding in quiet (SPIQ) and in noise (SPIN) was assessed using the ecologically valid Dutch Linguistically Controlled Sentences test. An extensive cognitive test battery encompassed measures of sustained attention, working memory, processing speed, and cognitive flexibility and inhibition, through both auditory and visual assessments. Correlation analyses examined the relationships between the independent variables (demographics and cognition) and SPIQ and SPIN separately. Identified predictors then underwent stepwise and hierarchical multiple regression analyses, with significant variables included in final multiple regression models for SPIQ and SPIN separately.

Results: The final multiple regression models demonstrated statistically significant predictions for SPIQ (adjusted R² = 0.699) and SPIN (adjusted R² = 0.776). Audiometric hearing status and auditory working memory significantly contributed to predicting SPIQ, while age, educational level, audiometric hearing status, auditory sustained attention, and auditory working memory played significant roles in predicting SPIN.

Conclusions: This study underscores the necessity of exploring the construct validity of cognitive tests within audiological research. The findings advocate for the superiority of auditory cognitive tests over visual testing in relation to speech understanding.
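The regression models above are summarized by adjusted R², which penalizes R² for the number of predictors (n observations, p predictors). A minimal sketch of the standard formula; the R² of 0.72 and p = 5 below are invented for illustration, while n = 104 matches the study's total sample:

```python
def adjusted_r2(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# E.g., an R^2 of 0.72 from 5 predictors over 104 participants:
print(round(adjusted_r2(0.72, n=104, p=5), 3))  # → 0.706
```

The penalty grows with p, which is why a model keeping only the significant predictors (as in the final models above) can report a higher adjusted R² than a larger model with the same raw R².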
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-03-21. DOI: 10.1097/AUD.0000000000001651
Amanda Saksida, Sašo Živanović, Saba Battelino, Eva Orzan
"Let's See If You Can Hear: The Effect of Stimulus Type and Intensity to Pupil Diameter Response in Infants and Adults." Pages 1111-1124.

Objectives: Pupil dilation can serve as a measure of auditory attention. It has been proposed as an objective measure for adjusting hearing aid configurations and as a measure of hearing threshold in the pediatric population. Here we explore (1) whether the pupillary dilation response (PDR) to audible sounds can be reliably measured in normally hearing infants within their average attention span, and in normally hearing adults; (2) how accurate within-participant models are in classifying the PDR based on stimulus type at various intensity levels; (3) whether the amount of analyzed data affects model reliability; and (4) whether there are systematic differences in the PDR between speech and nonspeech sounds, and between the discrimination and detection paradigms.

Design: In experiment 1, we measured the PDR to target warble tones at 500 to 4000 Hz compared with a standard tone (250 Hz) in an oddball discrimination test. A group of normally hearing infants was tested in experiment 1a (n = 36, mean [ME] = 21 months), and a group of young adults in experiment 1b (n = 12, ME = 29 years). The test was divided into five intensity blocks (30 to 70 dB SPL). In experiment 2a (n = 11, ME = 24 years), the task from experiment 1 was transformed into a detection task by removing the standard tone, and in experiment 2b (n = 12, ME = 29 years), participants listened to linguistic (Ling-6) sounds instead of tones.

Results: In all experiments, an increased PDR was significantly associated with target sound stimuli at the group level. Although we found no overall effect of intensity on response amplitude, the results were clearest at the highest tested intensity level (70 dB SPL). The nonlinear classification models, run for each participant separately, yielded above-chance classification accuracy (sensitivity, specificity, and positive predictive value above 0.5) in 76% of infants and 75% of adults. Accuracy further improved when only the first six trials at each intensity level were analyzed. However, accuracy was similar when pupil data were randomly attributed to the target or standard categories, indicating over-sensitivity of the proposed algorithms to regularities in the PDR at the individual level. No differences in classification accuracy were found between infants and adults at the group level, nor between the discrimination and detection paradigms (experiment 2a versus 1b), whereas the results in experiment 2b (speech stimuli) outperformed those in experiment 1b (tone stimuli).

Conclusions: The study confirms that the PDR is elicited in both infants and adults across different stimulus types and task paradigms and may thus serve as an indicator of auditory attention. However, for the estimation of the hearing (or comfortable listening) threshold at the individual level, the most efficient and time-effective protocol with the …
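The per-participant criterion above (sensitivity, specificity, and positive predictive value all above 0.5) can be computed directly from confusion-matrix counts. A sketch with invented trial counts, purely to make the criterion concrete:

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, and PPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # target trials correctly flagged
        "specificity": tn / (tn + fp),   # standard trials correctly rejected
        "ppv": tp / (tp + fp),           # flagged trials that were targets
    }

def above_chance(m: dict, threshold: float = 0.5) -> bool:
    """The study's criterion: all three metrics must exceed 0.5."""
    return all(v > threshold for v in m.values())

# Invented counts for one participant: 10 target and 10 standard trials.
m = classification_metrics(tp=8, fp=3, tn=7, fn=2)
print(m, above_chance(m))   # sensitivity 0.8, specificity 0.7, PPV ≈ 0.73 → True
```

Requiring all three metrics to clear 0.5 simultaneously is stricter than accuracy alone, which is relevant given the report that randomly relabeled pupil data produced similar accuracy.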
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-01-16. DOI: 10.1097/AUD.0000000000001636
Lisa R Park, Margaret E Richter, Erika B Gagnon, Shannon R Culbertson, Lillian W Henderson, Margaret T Dillon
"Benefits of Cochlear Implantation and Hearing Preservation for Children With Preoperative Functional Hearing: A Prospective Clinical Trial." Pages 941-951.

Objectives: This study was designed to (1) compare preactivation and postactivation performance with a cochlear implant for children with functional preoperative low-frequency hearing; (2) compare outcomes of electric-acoustic stimulation (EAS) versus electric-only stimulation (ES) for children with versus without hearing preservation, to understand the benefits of low-frequency acoustic cues; and (3) investigate the relationship between postoperative acoustic hearing thresholds and performance.

Design: This was a prospective, 12-month, between-subjects trial including 24 pediatric cochlear implant recipients with preoperative low-frequency functional hearing. Participants ranged in age from 5 to 17 years. They were recruited at device activation and fitted with EAS or ES based on their postoperative thresholds. Group outcomes were compared for single-word recognition, masked sentence recognition, perceived hearing abilities, speech production, receptive language, expressive language, and prosodic identification.

Results: Children experienced improvements in word recognition, subjective hearing, speech production, and expressive language with EAS or ES compared with their preoperative abilities. Children using EAS performed better on a prosodic identification task and had higher subjective hearing scores postactivation than children using ES. There was a significant relationship between postoperative thresholds at 125 Hz and prosodic identification.

Conclusions: The results of this study support cochlear implantation for children with normal-to-moderate low-frequency hearing thresholds and severe-to-profound high-frequency hearing loss. They also highlight the benefits of postoperative hearing preservation for language development.
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-03-18. DOI: 10.1097/AUD.0000000000001655
Duo-Duo Tao, Yuhui Fan, John J Galvin, Ji-Sheng Liu, Qian-Jie Fu
"Effects of Masker Intelligibility and Talker Sex on Speech-in-Speech Recognition by Mandarin Speakers Across the Lifespan." Pages 1085-1094. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12207858/pdf/

Objectives: Speech perception develops during childhood, matures in early adulthood, and declines in old age. Everyday listening environments often contain competing sounds that can interfere with perception of the signal of interest. With competing speech, listeners often experience informational masking, in which the intelligibility and acoustic characteristics (e.g., talker sex differences) of the maskers interfere with understanding of the target speech. How segregation cues in competing speech are utilized across the lifespan is not well understood, and there is a dearth of research on speech-in-speech recognition across the lifespan in speakers of tonal languages such as Mandarin Chinese.

Design: Speech recognition thresholds (SRTs) were measured in listeners with age-adjusted normal hearing, ranging in age from 5 to 74 years. All participants were native speakers of Mandarin Chinese. SRTs were measured in the presence of two-talker Forward or Reverse speech maskers in which the masker sex was the same as or different from the target.

Results: In general, SRTs were highest (poorest) with the Forward same-sex maskers and lowest (best) with the Reverse different-sex maskers. SRT data were analyzed for five age groups: child (5 to 9 years), youth (10 to 17 years), adult (18 to 39 years), middle-aged (40 to 59 years), and elderly (60 to 74 years). Overall, SRTs were significantly higher for the child group than for the youth, adult, middle-aged, and elderly groups (p < 0.05), and significantly higher for the elderly than for the adult group (p < 0.05). There was a significant interaction among age group, speech direction, and talker sex cues: SRTs were significantly higher for Forward than for Reverse speech, and significantly higher for same-sex than for different-sex maskers, for all age groups (p < 0.05) except the child group.

Conclusions: Consistent with previous studies with non-tonal language speakers, the present SRTs with tonal language speakers were best in the adult group and poorest in the child and elderly groups. The child and youth groups demonstrated greater masking release with Reverse speech than with different-sex maskers, while the elderly group exhibited greater release with the different-sex maskers than with Reverse speech. This pattern may reflect developmental effects on the utilization of talker sex cues in children; in older adults, enhanced top-down processes may compensate for age-related declines in the processing of temporal envelope and temporal fine structure information.
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-02-05. DOI: 10.1097/AUD.0000000000001637
Henry J Adler
"Language Complexities for Deaf and Hard of Hearing Individuals in Their Pursuit of a Career in Science, Technology, Engineering, Mathematics, and Medicine: Perspectives From an LSL/ASL User." Pages 851-855.

A research scientist with 35 years of experience in hearing research, the author writes that his own experiences provide a perspective that may be valuable both for future d/Deaf and Hard of Hearing (D/HH) individuals and for their peers with typical hearing in the pursuit of a career in Science, Technology, Engineering, Mathematics, and Medicine (STEMM). The author first describes the role of Hearing Inclusive-Association for Research in Otolaryngology in enhancing inclusivity and accessibility for D/HH scientists in the field of hearing research. Second, he describes the challenges faced by D/HH scientists, which arise from the difficulties of working with peers with typical hearing and result in less inclusivity and accessibility for D/HH scientists. The next section deals with solutions to these challenges, including American Sign Language interpreters, websites that give advice on inclusivity and accessibility, and technological advances such as assistive listening devices and smartphones with auto-captioning capability; these solutions, however, are fraught with issues such as limited budgets and misperceptions. Fourth, the author argues that the experiences necessary for a career in STEMM may require a higher-than-expected degree of collaboration with peers with typical hearing outside the laboratory. Finally, he argues that studies on successful D/HH scientists in STEMM fields should include their experiences of obtaining research funding.
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-01-31. DOI: 10.1097/AUD.0000000000001633
Gregory M Ellis, Rebecca Bieber, Alyssa Davidson, LaGuinn Sherlock, Michele Spencer, Douglas Brungart
"Improving the Predictive Strength of Better-Ear Four-Frequency Pure-Tone Average With the Addition of the Tinnitus and Hearing Survey-Hearing Subscale." Pages 909-921.

Objectives: The objective of this project was to quantify the relative efficacy of a four-frequency pure-tone average in the better ear (PTA4), the Hearing subscale of the Tinnitus and Hearing Survey (THS-H), and a combination of the two in predicting speech-in-noise performance, hearing aid recommendation, and hearing aid use among United States service members (SMs).

Design: A two-analysis retrospective study was performed. The first analysis examined the degree to which better-ear PTA4 alone, THS-H alone, and better-ear PTA4 in conjunction with THS-H predicted performance on a speech-in-noise test, the modified rhyme test. Three binomial mixed-effects models were fitted using better-ear PTA4 alone, THS-H alone, and both measures as primary predictors of interest; age and sex were included as covariates in all models. The models were compared using chi-square goodness-of-fit tests, and the best-fitting model was examined. Data from 5988 SMs were analyzed in the first analysis. The second analysis examined the degree to which the same three predictor sets predicted two hearing aid-related outcomes: recommendation for hearing aids by a clinician, and hearing aid use. Three receiver operating characteristic curves were fitted for each question (better-ear PTA4 alone, THS-H alone, and better-ear PTA4 + THS-H), and the area under the curve was bootstrapped to generate confidence intervals for comparing the three measures. Data from 8001 SMs were analyzed in the second analysis.

Results: In the first analysis, all three models explained more variance than chance; however, the better-ear PTA4 + THS-H model was a significantly better fit than either the better-ear PTA4-alone or the THS-H-alone model. Significant main effects of better-ear PTA4 and THS-H indicated that proportion correct decreased as better-ear PTA4 and THS-H increased, and a significant interaction showed that proportion correct decreased more rapidly when both increased in tandem. In the second analysis, better-ear PTA4 + THS-H showed good predictive discrimination of a prior hearing aid recommendation. For predicting hearing aid use, better-ear PTA4 was the only predictor whose bootstrapped area-under-the-curve confidence interval overlapped 0.50, indicating that better-ear PTA4 alone predicts hearing aid use at chance level. Both THS-H alone and better-ear PTA4 + THS-H predicted hearing aid use better than chance, but with poor discrimination overall.

Conclusions: Adding the THS-H to the better-ear PTA4 improves predictions of speech intelligibility in noise, has good predictive strength for hearing aid recommendations, and predicts hearing aid use better than chance. This study provides evidence for using surveys in conjunction with objective data when evaluating hearing ability and recommending int…
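The better-ear PTA4 predictor above is a simple derived measure: a four-frequency pure-tone average is conventionally the mean of audiometric thresholds at 0.5, 1, 2, and 4 kHz, and the better ear is the one with the lower (better) average. A sketch under that convention, with invented thresholds (the study's exact frequency set is an assumption here):

```python
# Conventional four-frequency PTA frequencies (Hz); assumed, not quoted from the study.
PTA4_FREQS_HZ = (500, 1000, 2000, 4000)

def pta4(thresholds_db: dict) -> float:
    """Four-frequency pure-tone average (dB HL) over 0.5/1/2/4 kHz."""
    return sum(thresholds_db[f] for f in PTA4_FREQS_HZ) / len(PTA4_FREQS_HZ)

def better_ear_pta4(left: dict, right: dict) -> float:
    """Lower (better) of the two ears' PTA4 values."""
    return min(pta4(left), pta4(right))

# Illustrative audiogram: mild high-frequency loss, right ear slightly better.
left = {500: 20, 1000: 25, 2000: 35, 4000: 50}
right = {500: 15, 1000: 20, 2000: 30, 4000: 45}
print(better_ear_pta4(left, right))  # → 27.5
```

Collapsing the audiogram to a single number like this is exactly why the study pairs it with the THS-H: the average discards configuration and self-reported difficulty, which the survey partly restores.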
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-06-16. DOI: 10.1097/AUD.0000000000001646
Margaret Cychosz, Chiara Scarpelli, Jihyun Stephans, Ana Marija Sola, Kayla Kolhede, Rebecca Ramirez, Erin Christianson, Vincci Chan, Dylan K Chan
"Rapid Increases in Children's Spontaneous and Responsive Speech Vocalizations Following Cochlear Implantation: Implications for Spoken Language Development." Pages 1029-1043.

Objectives: Cochlear implants are the most effective means of providing access to spoken language models for children with severe to profound deafness. In typical development, spoken language emerges gradually as children vocally explore and interact with caregivers. But it is unclear how early vocal activity unfolds after children gain access to auditory signals, and thus spoken language, via cochlear implants, and how this early vocal exploration predicts children's spoken language development. This longitudinal study investigated how two formative aspects of early language (child speech productivity and caregiver-child vocal interaction) develop following cochlear implantation, and how these aspects impact children's spoken language outcomes.

Design: Data were collected via small wearable recorders that measured caregiver-child communication in the home before and for up to 3 years after implantation (N = 25 children, average = 167 hours/child, 4,180 total hours of observation over an average of 11 unique days/child). Spoken language outcomes were measured using the Preschool Language Scales-5. Growth trajectories were compared with those of a normative sample of children with typical hearing (N = 329).

Results: Even before implantation, all children vocalized and vocally interacted with caregivers. Following implantation, child speech productivity (β = 9.67, p < 0.001) and caregiver-child vocal interactions (β = 12.65, p < 0.001) increased significantly faster for children with implants than for younger, hearing-age-matched controls with typical hearing, with the fastest growth occurring in the period following implant activation. There were significant, positive effects of caregiver-child interaction on children's receptive, but not expressive, spoken language outcomes.

Conclusions: Overall, children who receive cochlear implants experience robust growth in speech production and vocal interaction (crucial components underlying spoken language), and they follow a similar, albeit faster, developmental timeline to children with typical hearing. Regular vocal interaction with caregivers in the first 1 to 2 years post-implantation reliably predicts children's comprehension of spoken language above and beyond known predictors such as age at implantation.
Ear and Hearing. Pub Date: 2025-07-01. Epub Date: 2025-02-26. DOI: 10.1097/AUD.0000000000001643
Shae D Morgan, Erin M Picou, Elizabeth D Young, Samantha J Gustafson
"Relationship Between Auditory Distraction and Emotional Dimensionality for Non-Speech Sounds." Pages 983-996.

Objectives: If task-irrelevant sounds are present while someone is actively listening to speech, the irrelevant sounds can cause distraction, reducing word recognition performance and increasing listening effort. In some previous investigations of auditory distraction, the task-irrelevant stimuli were non-speech sounds (e.g., laughter, animal sounds, music), which are known to elicit a variety of emotional responses. Variations in the emotional response to a task-irrelevant sound could influence the distraction effect. The goal of this study was to examine the relationship between the arousal (exciting versus calming) or valence (positive versus negative) of task-irrelevant auditory stimuli and auditory distraction. Using non-speech sounds that have been used previously in a distraction task, we sought to determine whether the arousal or valence of the stimuli affected word recognition or verbal response times (which serve as a measure of behavioral listening effort). We anticipated that the perceived arousal and valence of task-irrelevant stimuli would be related to distraction from target stimuli.

Design: In an online listening task, 19 young adult listeners rated the valence and arousal of non-speech sounds that had previously served as task-irrelevant stimuli in studies of auditory distraction. Word recognition and verbal response time data from those previous studies were reanalyzed using the present ratings to evaluate the effect of valence or arousal category on the distraction effect in quiet and in noise. In addition, correlation analyses were conducted among ratings of valence, ratings of arousal, word recognition performance, and verbal response times.

Results: The presence of task-irrelevant stimuli affected word recognition performance; this effect was observed generally in quiet, and for stimuli rated as exciting (in noise) or calming (in quiet). The presence of task-irrelevant stimuli also affected reaction times. Background noise increased verbal response times by approximately 35 msec, and all task-irrelevant stimuli, regardless of valence or arousal category, increased verbal response times by more than 200 msec relative to the condition with no task-irrelevant stimuli. Valenced stimuli caused the largest distraction effect on response times; there was no difference in the distraction effect on verbal response times based on stimulus arousal category. Correlation analyses between valence ratings and the dependent variables (word recognition and reaction time) revealed that, in quiet, there was a weak but statistically significant relationship between valence ratings (absolute deviation from neutral) and word recognition scores: the more valenced a stimulus, the more distracting it was in terms of word recognition performance. This significant relationship between valence and word recognition was not evident when participants completed the …
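The valence analysis above correlates each stimulus's absolute deviation from a neutral rating with word recognition. A sketch of that transform plus a Pearson correlation; the ratings, scores, and the neutral point of 5 on a 1-to-9 scale are all invented for illustration:

```python
import statistics

def valence_deviation(ratings, neutral=5.0):
    """Absolute distance of each valence rating from the neutral point."""
    return [abs(r - neutral) for r in ratings]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Invented example: more strongly valenced stimuli, lower word recognition.
valence = [1.0, 2.5, 5.0, 7.5, 9.0]          # 1 = negative, 9 = positive
word_rec = [0.78, 0.85, 0.93, 0.84, 0.76]    # proportion of words correct
dev = valence_deviation(valence)
print(round(pearson_r(dev, word_rec), 2))    # strongly negative in this toy data
```

Folding the scale at neutral is what lets a single correlation capture "more valenced in either direction means more distraction," which is the relationship the study reports in quiet.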