{"title":"Implications of Linguistic Convergence and Divergence Among Matched and Mixed Autistic and Non-Autistic Communication Partners.","authors":"Morgan Jameson, Allison Bean","doi":"10.1044/2025_JSLHR-24-00827","DOIUrl":"10.1044/2025_JSLHR-24-00827","url":null,"abstract":"<p><strong>Purpose: </strong>Linguistic entrainment (i.e., increasing linguistic similarity over time) and its positive social effects are well documented among non-autistic communicators. This study sought to, first, investigate the extent of syntactic and semantic entrainment between communication partners with matched or mixed autism status (i.e., autistic and non-autistic) and, second, explore how entrainment influences rapport development for autistic and non-autistic communicators.</p><p><strong>Method: </strong>Thirty-three autistic adults and 37 non-autistic adults were paired in either a matched or mixed condition. Pair interactions, involving two structured communication tasks (Twenty Questions and tangram identification) via videoconference, were transcribed and analyzed for syntactic and semantic entrainment. Participants also completed a survey about the rapport they experienced in the interaction.</p><p><strong>Results: </strong>Matched autistic pairs exhibited greater overall syntactic convergence across the full interaction than matched non-autistic pairs, although no significant group differences emerged at the task level. For both autistic and non-autistic participants, greater syntactic convergence was associated with stronger rapport development. For semantic convergence, the results differed: Mixed pairs significantly diverged from each other. In contrast, matched pairs showed no significant change in semantic alignment, neither converging nor diverging.</p><p><strong>Conclusions: </strong>The findings suggest that autistic communicators engage in effective linguistic entrainment through a cumulative alignment process that unfolds over the full interaction rather than within isolated tasks. This study challenges common assumptions about autistic communication based on mixed interactions and highlights the importance of considering matched-autistic communication contexts. It also supports the double empathy theory, which emphasizes mutual understanding between neurotypes, and suggests that autistic communicators possess unique strengths in interpersonal communication.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4809-4828"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145034984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social Skills and Connectedness in School-Age Children From Vulnerable Backgrounds Who Stutter.","authors":"Amber L Faircloth, Molly M Jacobs, Patrick M Briley","doi":"10.1044/2025_JSLHR-24-00799","DOIUrl":"10.1044/2025_JSLHR-24-00799","url":null,"abstract":"<p><strong>Purpose: </strong>Exploring the psychosocial experiences of school-age children from vulnerable backgrounds who stutter allows for a better understanding of the compounding impacts of stuttering and challenging familial factors.</p><p><strong>Method: </strong>Data were drawn from Wave 5 of the Future of Families and Child Wellbeing Study (FFCWS). This study evaluated social adaptability among children from vulnerable backgrounds who do stutter (CVBWS) and children from vulnerable backgrounds who do not stutter (CVBWNS) using two scales: the Social Skills Rating System (SSRS) and the Connectedness at School Scale (CSS). A comparison of means and regression analyses were used to compare the groups controlling for heterogeneity in diverse demographics. This study utilized survey-specific analytic tools in SAS 9.4 that account for the sampling framework, survey design, and reporting structure of the FFCWS.</p><p><strong>Results: </strong>Of the 3,345 caregivers (unweighted count), 106 reported that their child stuttered or stammered. CVBWS reported lower CSS (2.97, <i>SD</i> = 1.06) than the CVBWNS (3.08, <i>SD</i> = 0.97), a statistically significant difference (<i>t</i> = 2.51, <i>p</i> = .013). CVBWS also exhibited poorer social skills as indicated by a lower average SSRS rating (48.15 points, <i>SD</i> = 10.99) compared to CVBWNS (54.11 points, <i>SD</i> = 12.96; <i>t</i> = -3.77, <i>p</i> < .001).</p><p><strong>Conclusions: </strong>When working with CVBWS, it is important that baseline and posttreatment measures encompass more than just speech production outcomes. Current findings support this position, as some CVBWS experience more negative social interactions within their school than CVBWNS. Therefore, it is critical that additional attention be paid to the social and emotional development of CVBWS.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4673-4687"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144986440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temporal Relationships Between Hyoid Burst and Pharyngeal Pressure Events.","authors":"Jilliane Marai F Lagus, Corinne A Jones","doi":"10.1044/2025_JSLHR-24-00782","DOIUrl":"10.1044/2025_JSLHR-24-00782","url":null,"abstract":"<p><strong>Purpose: </strong>This study examined temporal relationships between hyoid burst and pharyngeal pressure events and evaluated how reference point, age, and sex influence pharyngeal swallowing coordination. We hypothesized that (a) latency between hyoid burst and pharyngeal pressure events increases with age, (b) males have longer event latency, and (c) pharyngeal pressure timing is less variable using a manometric reference point than hyoid burst.</p><p><strong>Method: </strong>We analyzed ten 10-ml thin liquid swallows from 104 (42 males) healthy adults (aged 21-89 years) under simultaneous high-resolution pharyngeal manometry and videofluoroscopy. Latency between hyoid burst and pharyngeal pressure events was measured. Latency range was used to describe variability. Repeated-measures analysis of variance assessed age and sex effects on latency from reference points to pharyngeal pressure events.</p><p><strong>Results: </strong>Latency was not affected by age or sex (<i>p</i> ≥ .05). Significant main effects of pressure event on latency were found for hyoid burst and manometric reference point (<i>p</i> < .001), with similar event order. There was a significant Reference Point × Pharyngeal Pressure Event interaction effect for latency range (<i>p</i> = .016); ranges from hyoid burst were more variable than from manometric reference point (<i>p</i> ≤ .02), except from the velopharyngeal maximum pressure time point (<i>p</i> = .92).</p><p><strong>Conclusions: </strong>Relative order and timing of pharyngeal pressure events are not impacted by age or sex, suggesting stability of pressure coordination with age and no sex differences. Videofluoroscopy may be less precise than high-resolution pharyngeal manometry for latency range assessment due to subjectivity and lower temporal resolution.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.29991907.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4580-4590"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145002696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception of Gender in Voices of 8- to 12-Year-Old Children in the Context of Regional Dialect Variation.","authors":"Christopher E Holt, Ewa Jacewicz, Robert A Fox","doi":"10.1044/2025_JSLHR-24-00907","DOIUrl":"10.1044/2025_JSLHR-24-00907","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the perception of gender in voices of older children, around the onset of puberty. The aim was to characterize gender categorization performance of adult and age-matched child listeners under the conditions of noise and talker variability arising from regional dialect variation.</p><p><strong>Method: </strong>A total of 49 participants, 26 adults and 23 children, listened to syllables, read sentences, and phrases from spontaneous conversations produced by 90 children aged 8-12 years, representing three regional varieties of American English spoken in Ohio, North Carolina, and Wisconsin, evenly divided by gender in each dialect group (15 boys, 15 girls). All stimuli were masked by speech-shaped noise and presented over laboratory headphones in a two-alternative forced-choice paradigm. Linear mixed-effects models were used with listener group (adults, children), speaker age in months, dialect, and speaker gender as fixed predictors of listener accuracy; all listeners were from Ohio. Models were also constructed with fundamental frequency as the most prominent perceptual cue to gender.</p><p><strong>Results: </strong>Accuracy was above chance, with children outperforming adults in listening to syllables and spontaneous phrases. Speaker age increased accuracy for boys but not for girls, and fundamental frequency was a stronger accuracy predictor than age for both genders. Feminine and masculine characteristics in children's voices varied with dialect; listeners were sensitive to these variations, showing attunement to the local dialect norms, more so in girls than in boys.</p><p><strong>Conclusions: </strong>Gender of 8- to 12-year-old children can be identified above chance when masked by noise. Age-matched child listeners' performance is adultlike, and children may have enhanced sensitivity to gender in other children's speech. Sociocultural context does influence gender categorization, indicating that characteristics of gendered speech are learned from a local community.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.30104353.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4619-4644"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145133266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metacognitive Awareness of Lipreading Gains in Young and Older Adults.","authors":"Elena Giovanelli, Benedetta Desolda, Chiara Valzolgher, Elena Gessa, Tommaso Rosi, Francesco Pavani","doi":"10.1044/2025_JSLHR-24-00742","DOIUrl":"10.1044/2025_JSLHR-24-00742","url":null,"abstract":"<p><strong>Purpose: </strong>When listening to speech in noise, lipreading can facilitate communication. However, beyond its objective benefits, individuals' perceptions of lipreading advantages may influence their motivation to use it in daily interactions. We investigated to what extent older and younger adults are metacognitively aware of lipreading benefits, focusing not only on performance improvements but also on changes in confidence and listening effort and on the internal evaluations (confidence and effort) that shape listening experiences and may influence strategy adoption.</p><p><strong>Method: </strong>Forty participants completed a hearing-in-noise task in virtual reality, facing a human-like avatar behind a translucent panel that varied in transparency to create pairs of conditions with different lip visibility. We measured audiovisual performance, confidence, and effort, deriving both real improvements (i.e., lipreading gain) and metacognitive improvements (i.e., perceived changes in accuracy, confidence, and effort) on a trial-by-trial basis.</p><p><strong>Results: </strong>Both age groups experienced comparable real improvements from lipreading and were similarly aware of its benefits for accuracy and confidence. Yet, older adults were less sensitive to the reduction of listening effort associated with higher lip visibility, particularly those with lower unisensory lipreading abilities (as measured in a visual-only condition).</p><p><strong>Conclusions: </strong>While younger and older adults share similar awareness of lipreading benefits in speech perception, reduced sensitivity to effort reduction may impact older adults' motivation to use lipreading in everyday communication. Given the role of perceived effort in strategy adoption, these findings highlight the importance of addressing effort perceptions in interventions aimed at improving communication in aging populations.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.30179404.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4720-4735"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145202810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptation and Validation of the Bilingual Code-Switching Profile Into Kannada.","authors":"Kashyap Sahana, Srirangam Vijayakumar Narasimhan, Govinda Yashaswini","doi":"10.1044/2025_JSLHR-24-00214","DOIUrl":"10.1044/2025_JSLHR-24-00214","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed at adapting and validating the Bilingual Code-Switching Profile into Kannada (KBCSP) to assess Code-Switching (CS) in Kannada-English speaking bilinguals. Given the significance of CS in bilingual cognition and linguistic interactions, the study evaluates the reliability and validity of the KBCSP, contributing to the broader understanding of CS behavior in Kannada-speaking populations.</p><p><strong>Method: </strong>A nonrandomized, prospective cross-sectional design with purposive sampling was used. Initially, the Bilingual Code-Switching Profile was adapted into Kannada (KBCSP). The KBCSP was administered to three groups of participants. Group 1 consisted of 100 bilingual participants (first language [L1], Kannada; second language [L2], English). Group 2 consisted of 10 native Kodava-Kannada-speaking multilingual participants, and Group 3 consisted of 10 age- and gender-matched participants who were native Kannada-speaking bilinguals (L1, Kannada; L2, English). Responses from all the participants were tabulated and were subjected to statistical analyses.</p><p><strong>Results: </strong>Results indicated that all the sections of the KBCSP had test-retest reliability scores of more than 0.92, indicating that the KBCSP had excellent test-retest reliability. The internal validity of the KBCSP was determined by using factor analysis, and results yielded six factors with eigenvalues greater than 1, indicating excellent internal consistency. Results also revealed significant differences in language switching, ease of language switching, and attitudes toward language switching between Group 2 and Group 3 participants, indicating good discriminant validity.</p><p><strong>Conclusion: </strong>The KBCSP is a self-report questionnaire with good validity and reliability and therefore provides a structured means to document CS behaviors and patterns, making it a valuable resource for researchers studying Kannada-speaking bilingual adults.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4796-4808"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145066563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Contribution of Language Ability and Linguistic Factors to Disfluency in Bilingual Children.","authors":"Chenelle Walker, Emma Libersky, Margarita Kaushanskaya","doi":"10.1044/2025_JSLHR-24-00762","DOIUrl":"10.1044/2025_JSLHR-24-00762","url":null,"abstract":"<p><strong>Purpose: </strong>Speech disfluencies are common in individuals who do not stutter, with estimates suggesting a typical rate of six per 100 words. Factors such as language ability, processing load, planning difficulty, and communication strategy influence disfluency. Recent work has indicated that bilinguals may produce more disfluencies than monolinguals, but the factors underlying disfluency in bilingual children are poorly understood.</p><p><strong>Method: </strong>We investigated the child, lexical, and syntactic factors associated with disfluencies in bilingual children who do not stutter during parent-child interactions. Forty-four Spanish-English bilingual parent-child dyads engaged in a play-based interaction. The children were 4-6 years old (<i>M</i> = 62.3 months, <i>SD</i> = 6.96; 19 boys, range: 48.0-71.0). Children's language abilities ranged from clinically low (i.e., with a developmental language disorder) to typical.</p><p><strong>Results: </strong>Analyses revealed that children were more disfluent when they produced longer utterances. There was also a tendency for children with lower language skills to produce more disfluencies than children with higher language skills, when producing longer utterances. However, the mean lexical frequency of each utterance, the language of the utterance, and the child's language dominance were not associated with children's disfluency.</p><p><strong>Conclusions: </strong>These findings inform psycholinguistic models of fluent speech production and indicate that in bilingual children who do not stutter, disfluency is largely a reflection of utterance length. Children's overall language ability, rather than language dominance, was the more important contributor to disfluency in spontaneous speech. These findings have implications for assessment of bilingual children and for considerations of factors that may support or hinder disfluency in bilingual children who do and do not stutter.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4781-4795"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145025137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of Voice Recognition in Relation to Phonological Awareness and Working Memory From Childhood to Adulthood.","authors":"Wenling Jiang, Songcheng Xie, Linjun Zhang, Hua Shu, Yang Zhang","doi":"10.1044/2025_JSLHR-25-00302","DOIUrl":"10.1044/2025_JSLHR-25-00302","url":null,"abstract":"<p><strong>Purpose: </strong>This cross-sectional study investigated the development of voice recognition (VR) from childhood to adulthood and the relationship between VR and two linguistic skills (i.e., phonological awareness [PA] and phonological working memory [PWM]).</p><p><strong>Method: </strong>The participants, comprising 25 children (aged 8-9 years), 25 adolescents (aged 12-13 years), and 26 young adults (aged 18-25 years), were tested on VR and two linguistic skills (PA and PWM).</p><p><strong>Results: </strong>Adults outperformed adolescents, and adolescents outperformed children in all three measures. Additionally, significant correlations between VR and linguistic skills were found in children and adolescents but not in adults.</p><p><strong>Conclusions: </strong>VR, PA, and PWM skills develop throughout childhood and adolescence. Furthermore, there is a maturational trajectory of interplay between voice and linguistic processing that continues into adolescence.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4749-4757"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145042934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Electrophysiological Correlates of Binaural Interaction in Free Field Using Phase-Modulated Speech in Noise.","authors":"Andreas Schroeer, Farah I Corona-Strauss, Richard A Morsch, Jorge Bohorquez, Ozcan Ozdamar, Daniel J Strauss","doi":"10.1044/2025_JSLHR-24-00443","DOIUrl":"10.1044/2025_JSLHR-24-00443","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the applicability of speech-induced binaural beats (SBBs), a phase modulation procedure that can be applied to arbitrary speech signals to generate cortical auditory evoked potentials (CAEPs) as an objective marker of binaural interaction when presented dichotically, in a free-field environment. Furthermore, the effect of speech-shaped masking noise on CAEPs was investigated.</p><p><strong>Method: </strong>Nineteen normal-hearing participants listened to sentences from a sentence matrix test. Sentences were presented from two loudspeakers situated 1 m away to the left and right of the participant. Each sentence contained one SBB and was presented in silence and in three different variations of masking noise: (a) identical noise from the same loudspeakers as the speech signals, (b) modified/phase-modulated noise from the same loudspeakers as the speech signals, and (c) noise presented from a separate loudspeaker placed behind the participants. Additionally, five participants listened to the sentences without noise, with and without one ear occluded, to ascertain the possibility of acoustic interference.</p><p><strong>Results: </strong>CAEPs were successfully recorded in all participants, in the no noise condition and all noise conditions. The presentation of noise from a separate loudspeaker significantly reduced the N1 amplitude. No CAEPs were recorded when one ear was occluded, indicating no contribution of acoustic interference.</p><p><strong>Conclusions: </strong>SBBs can be used to reliably evoke CAEPs as an objective marker of binaural interaction in the free field with masking noise. The advantage of this method is the use of speech material and the possible integration with existing behavioral tests for binaural interaction that utilize speech signals.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.30063004.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4942-4960"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145067421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frequency Importance Functions in Real-World Noise for Listeners With Typical Hearing and Hearing Loss.","authors":"Erik Jorgensen","doi":"10.1044/2025_JSLHR-25-00040","DOIUrl":"10.1044/2025_JSLHR-25-00040","url":null,"abstract":"<p><strong>Purpose: </strong>This study is a preliminary exploration of frequency importance functions for sentence recognition in real-world noise in listeners with typical hearing and listeners with hearing loss in unaided and aided conditions.</p><p><strong>Method: </strong>Participants with typical hearing (<i>n</i> = 25) and participants with sloping, high-frequency hearing loss (<i>n</i> = 17) repeated back target sentences presented in virtual acoustic scenes (church, cafe, dinner party, and food court) in a trial-by-trial design. Frequency-specific signal-to-noise ratios (SNRs) were calculated for each trial across 35 gammatone filters with center frequencies from 125 to 12000 Hz. Frequency importance was computed by regressing the proportion of keywords correct in each sentence against frequency-specific SNRs.</p><p><strong>Results: </strong>Frequency importance functions differed across environments; however, important frequencies were generally mid-to-high frequency, with the most consistent peak of the importance function observed at 2334 Hz. Participants with typical hearing relied on a greater range of frequencies than those with hearing loss, with differences across groups most evident for high frequencies. Providing additional high-frequency audibility with amplification increased frequency importance.</p><p><strong>Conclusions: </strong>Frequency importance functions are environment dependent and may differ based on the degree of informational or energetic masking in each environment. Frequency importance functions vary across listeners and hearing aid conditions, likely because of differences in audibility. Environmental and listener-specific frequency importance functions can inform hearing aid design and rehabilitation approaches.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.30059296.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4961-4977"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145067430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}