Responses and Nonresponses in a Bound Morpheme Elicitation Task by Deaf and Hard of Hearing Children.
Erin M Ingvalson, Tina M Grieco-Calub, Mark VanDam, Lynn K Perry. Journal of Speech, Language, and Hearing Research (JSLHR), pp. 2209-2218. Published 2026-05-07. doi: 10.1044/2026_JSLHR-25-00588

Purpose: We aimed to explore the rates of bound morpheme production at two time points (T1 and T2) by deaf and hard of hearing (DHH) preschoolers and their typically hearing (TH) peers. We further sought to describe the rates and types of unscorable responses children produced.
Method: Sixty-four DHH preschoolers and 66 TH preschoolers participated as part of a larger, ongoing longitudinal study. Children were given the Test of Early Grammatical Impairment (TEGI) screener, which elicits productions of the third-person singular present and past tense. TEGI screeners were given twice, spaced 6 months apart.
Results: TH children produced significantly more singular present-tense and regular past-tense morphemes than cochlear implant (CI)-using children at both time points; hearing aid-using children were not significantly different from TH children or CI users. All children were more accurate with the regular past tense at T2 than at T1. No interactions were significant. Examining the types of unscorable responses indicated that the DHH children were more likely to echo the prompt than TH children, particularly at T1.
Conclusions: Assessments that elicit bound morpheme productions may not best capture DHH children's morphological sensitivity. When language samples are not feasible, receptive tasks may be a good alternative for probing children's knowledge.

Speech Categorization Consistency Predicts Language and Reading Abilities in Korean School-Age Children.
Hyoju Kim, Wi-Jiwoon Kim, Bob McMurray, Dongsun Yim. Journal of Speech, Language, and Hearing Research (JSLHR), pp. 2046-2066. Published 2026-05-07. doi: 10.1044/2026_JSLHR-25-00770

Purpose: Speech perception continues to develop throughout school age and plays a fundamental role in language and reading development. Recent findings in English-speaking children suggest that speech categorization consistency (the stability of a listener's percept across multiple encounters with a speech sound) predicts both language and reading abilities, with a particularly strong link between reading skills and vowel perception. One hypothesis is that this is due to complex grapheme-phoneme correspondences (GPCs) in English vowels. The present study tested (a) whether the relationship between categorization consistency and language/reading abilities extends to typologically different languages and (b) whether the vowel-specific link observed in English is shaped by GPC complexity, using data from Korean, a language with relatively transparent GPCs.
Method: Forty-four first-grade Korean-speaking children completed a visual analog scale task, in which they heard tokens from a speech continuum and rated the correspondence between the stimulus and each word on a continuous scale. Standardized assessments of language and word reading were also conducted.
Results: Children with poorer language/reading abilities exhibited lower categorization consistency, consistent with the findings from English-speaking children. In contrast to prior findings, however, this relationship was not specific to vowel perception but held across all contrast types tested. Categorization gradiency (the slope of the categorization function) was not significantly associated with any outcome.
Conclusions: These findings extend prior work by demonstrating that categorization consistency predicts language and reading abilities, even in a language with transparent GPCs. Importantly, this association was observed across all phonemic contrasts, not just vowels, suggesting that the previously observed vowel-specific link in English may stem from the greater GPC complexity of English vowels. Taken together, these results indicate that while categorization consistency appears to be a critical predictor of linguistic outcomes, the specific pattern may vary across languages depending on the structure of their GPCs.
Supplemental material: https://doi.org/10.23641/asha.31907296

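The consistency construct above can be operationalized in several ways; as an illustrative sketch (an assumption on my part, not necessarily the study's exact metric), one simple measure treats consistency as low trial-to-trial variability of a child's repeated visual-analog-scale ratings of the same continuum token:

```python
from statistics import mean, pstdev

def categorization_consistency(ratings_by_token):
    """Mean within-token standard deviation of repeated VAS ratings.

    ratings_by_token maps each continuum step to the child's repeated
    ratings (0-1 scale) of that token. Lower values indicate a more
    stable (consistent) percept across encounters.
    """
    return mean(pstdev(r) for r in ratings_by_token.values())
```

A child who rates each token identically on every trial scores 0; larger values indicate a less stable percept, so in the study's framing lower scores would pattern with better language/reading outcomes.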
On- and Off-Domain Cognitive Performance of Experienced Hearing Aid Users in Background Noise.
Devan M Lander, Jodi Baxter, Christina M Roup. Journal of Speech, Language, and Hearing Research (JSLHR), pp. 2287-2302. Published 2026-05-07. doi: 10.1044/2026_JSLHR-25-00572

Purpose: Hearing aid use in older adults has been suggested to reduce cognitive load, thereby improving performance on auditory-based cognitive tasks. However, there is limited research regarding how hearing aids impact performance during cognitively demanding auditory and visual tasks. The purpose of this study was to investigate the impact of advanced-level hearing aid use during auditory (on-domain) and visual (off-domain) cognitive tasks in background noise.
Method: Thirty-one older adults aged 60-87 years participated in the study. All participants were experienced and satisfied hearing aid users. Participants were fitted with a study hearing aid to ensure consistent signal processing characteristics. A series of six cognitive tasks were completed in quiet and in background noise, with and without hearing aids. The visual tasks included the Trail Making Test, Stroop Color Word Test, and Size Comparison Span Test. The auditory tasks were the Oral Trail Making Test, Auditory Stroop Task, and Word Auditory Recognition and Recall Measure.
Results: For measures of inhibition, executive function, and attention, there was no significant benefit from hearing aid use. In contrast, hearing aid use resulted in better performance on working memory tasks.
Conclusions: The benefit from hearing aids for auditory (on-domain) and visual (off-domain) cognitive task performance was mixed. Older adults performed better in quiet than in noise, with and without hearing aids. Furthermore, hearing aids were beneficial in quiet environments when the working memory task was auditory (on-domain). The findings support the conclusion that hearing aid use improves access to working memory in both quiet and noisy conditions, which may ultimately improve speech understanding.

Identifying Developmental Language Disorder in Bilingual Children Using Narrative Measures.
Lotte Van den Eynde, Ellen Rombouts, Maaike Vandermosten, Inge Zink. Journal of Speech, Language, and Hearing Research (JSLHR), pp. 2268-2286. Published 2026-05-07. doi: 10.1044/2026_JSLHR-25-00489

Purpose: Narrative abilities offer valuable insights into daily-life communication and can aid the complex task of identifying developmental language disorder (DLD) in bilingual children. However, there is limited consensus on which linguistic measures are most informative for distinguishing bilingual children with DLD from their typically developing (TD) peers. This study aimed to quantitatively determine which combination of narrative measures most accurately classifies bilingual TD children and bilingual children with DLD.
Method: Fifty bilingual TD children and 50 bilingual children with DLD aged 5-9 years who spoke Dutch as a second language participated. Narrative skills were assessed in Dutch using both storytelling and retelling tasks. Eleven measures reflecting narrative productivity, complexity, and accuracy were analyzed across both tasks. Both group-level differences between TD children and children with DLD and individual-level diagnostic performance were examined.
Results: While most measures accurately distinguished between children with and without DLD at the group level, they did not achieve sufficient diagnostic accuracy at the individual level. Nevertheless, combining multiple measures improved classification. The storytelling task reached 82% diagnostic accuracy using four measures, while the retelling task achieved 80% diagnostic accuracy using 10 measures. A combined approach using four measures from both tasks increased accuracy to 85%. The most informative measures were the number of utterances, mean length of utterance, number of different words, and mean number of (grammatical) errors per utterance.
Conclusion: Extracting a limited set of measures from narrative tasks can provide both diagnostic effectiveness and clinical feasibility, supporting the inclusion of narrative tasks in bilingual language assessment.
Supplemental material: https://doi.org/10.23641/asha.32035188

Online Assessment and Enhancement of Auditory Perception for Residual /ɹ/ Distortions.
Elaine R Hitchcock, Laura C Ochs, Jonathan L Preston, Tara K McAllister. Journal of Speech, Language, and Hearing Research (JSLHR), pp. 2020-2045. Published 2026-05-07. doi: 10.1044/2026_JSLHR-25-00411

Purpose: This study evaluated the effects of computerized perceptual training by incorporating key modifications to address limitations identified in prior research. Training was specifically targeted to children identified as having "atypical perception," considered good candidates for perceptual training.
Method: Ten monolingual English-speaking children aged 9;0-11;7 (years;months) with residual speech sound disorder (RSSD) affecting /ɹ/ participated in a multiple-baseline study, completing three to six baselines before receiving twelve 30-min perceptual training sessions. A midpoint production probe was administered immediately after the perceptual training phase to assess preliminary changes in /ɹ/ accuracy. Subsequently, participants engaged in exploratory practice sessions focused on facilitating rhotic production, followed by three postprobe sessions and a 1-month follow-up. All study activities were conducted online.
Results: Visual inspection of individual trajectories and calculation of standardized effect sizes revealed meaningful changes in auditory-perceptual skills following the perception training. The majority of participants exhibited a reduction in categorical labeling variability, suggesting more consistent categorization of speech stimuli along the /ɹ/-/w/ continuum, and all participants showed increased perceptual accuracy in the identification task. Category goodness ratings of /ɹ/ words also improved for all participants, with effect size values indicating clinically meaningful change.
Conclusions: This study demonstrates that online computerized perceptual training holds promise as an effective intervention for improving speech, particularly when tailored to the needs of children with RSSD and atypical speech perception. With the long-term goal of making these tools freely available to clinical practitioners, we aim to empower clinicians to deliver targeted, effective interventions that accelerate progress for clients with RSSD.
Supplemental material: https://doi.org/10.23641/asha.32015043

Gloves Are Hands: How English and Italian Children Label Nouns and Predicates at 24 and 30 Months in a Picture-Naming Task.
Allegra Cattani, Arianna Bello. Journal of Speech, Language, and Hearing Research (JSLHR), pp. 2128-2142. Published 2026-05-07. doi: 10.1044/2026_JSLHR-25-00181

Purpose: This study investigates the semantic processes underlying how children acquire and use nouns and predicates (verbs, adjectives/adverbs), focusing on age and cross-linguistic differences in these naming strategies.
Method: Ninety-two children aged 23-25 months (53 English and 39 Italian) and 115 children aged 29-31 months (69 English and 46 Italian) took part in a picture-naming task to assess their acquisition of nouns and predicates. We investigated the types of responses (correct, incorrect, no response, and unintelligible) and the distribution of incorrect responses (semantic errors, visual errors, and other errors) across the two ages and two languages.
Results: Response accuracy increased significantly from 24 to 30 months across lexical categories and languages. At 30 months, children produced fewer no responses, incorrect responses, and unintelligible responses for nouns and fewer no responses for predicates. Italian children showed a higher frequency of unintelligible responses for nouns, while English children produced more no responses for predicates. The distribution of semantically incorrect responses also varied with age: Compared to 24-month-olds, 30-month-olds produced fewer semantic associative errors and onomatopoeic responses for nouns but more semantic coordinate errors for predicates. English children produced more semantic coordinate and subordinate errors for nouns and fewer semantic associative and onomatopoeic errors for predicates than Italian children.
Conclusion: Data are discussed in the context of cross-linguistic comparisons of the semantic representations underlying noun and predicate acquisition at 2-3 years.

The Influence of Memory Load, Speech-to-Noise Ratio, and Stimulus Rehearsal on the Pupil Dilation Response: Implications for the Assessment of Listening Effort.
Adriana A Zekveld, Veerle W Visser, Sophia E Kramer, Jorn Sangers, Cas Smits. Journal of Speech, Language, and Hearing Research (JSLHR), pp. 2339-2354. Published 2026-05-07. doi: 10.1044/2026_JSLHR-25-00164

Purpose: Pupillometry has been frequently used to examine the influence of auditory task demand on listening effort. However, the intelligibility effect on the pupil dilation response might be altered under high memory load.
Method: We assessed the effects of signal-to-noise ratio (SNR; auditory demand), memory load, and stimulus rehearsal on the pupil dilation response. Twenty-four participants with normal hearing were included (mean age = 22 years; 16 women). Sequences of four or six digits were presented in stationary noise at two auditory demand levels. For either 20% or 80% of the trials, digits were rehearsed. Participants rated listening effort, task difficulty, performance, and tendency to give up.
Results: Linear mixed-model analyses indicated that intelligibility was higher for four digits compared to six digits and for lower auditory demand compared to higher auditory demand. The mean pupil dilation was larger for lower auditory demand during listening. In the repetition interval, the peak and mean pupil dilations were larger for lower auditory demand compared to higher auditory demand, for six digits compared to four digits, and for 80% compared to 20% stimulus rehearsal. Subjective listening effort and task difficulty were higher for higher auditory demand than for lower auditory demand and for six digits than for four digits. Lower auditory demand also resulted in higher performance ratings and a lower tendency to give up compared to higher auditory demand.
Conclusions: The established decrease in the pupil dilation response with decreasing auditory demand (higher SNR) can be altered in tasks with relatively high memory demands. It is important to consider the memory demands imposed by the listening task when assessing the pupil dilation response.
Supplemental material: https://doi.org/10.23641/asha.31974978

Investigating Prosodic Focus Perception and Production in Autism Spectrum Disorder: A Systematic Review and Meta-Analysis.
Chen Kuang, Fei Chen, Yuyan Nie, Zengqiang Gou, Jinting Yan, Guanglei Liu. Journal of Speech, Language, and Hearing Research (JSLHR), pp. 2092-2111. Published 2026-05-07. doi: 10.1044/2026_JSLHR-25-00625

Purpose: Speakers of all human languages use prosodic changes to encode focus, and the ability to perceive and produce prosodic focus is crucial for developing linguistic and communicative skills. This review aims to explore the performance of prosodic focus perception and production among individuals with autism spectrum disorder (ASD) and to identify potential factors contributing to inconsistent findings in previous studies.
Method: We conducted a systematic search in three electronic databases and one web search engine to identify peer-reviewed research articles that compared the perception and production of prosodic focus between individuals with ASD and typically developing (TD) individuals. Effect sizes were calculated based on random-effects models. Meta-regression analyses were conducted to assess potential individual and methodological moderators.
Results: The comparison of perception accuracy between 441 individuals with ASD and 511 TD individuals revealed that individuals with ASD exhibited impaired prosodic focus perception (Hedges's g = -0.40), with no significant moderators for heterogeneity across studies. We also compared production accuracy between 483 individuals with ASD and 619 TD individuals, finding that production impairments in ASD were more pronounced (Hedges's g = -0.85). The moderator analysis revealed that nonverbal IQ and expressive language skills were significant moderators of production accuracy. In addition, individuals with ASD exhibited greater pitch variation in prosodic focus production.
Conclusion: Overall, these findings suggest that individuals with ASD exhibit greater difficulties in the production of prosodic focus compared to its perception, and increased pitch variation might be a prosodic feature associated with ASD.
Supplemental material: https://doi.org/10.23641/asha.31934088

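For reference on the effect sizes reported above: Hedges's g is Cohen's d (the standardized mean difference over the pooled standard deviation) multiplied by a small-sample bias correction. A minimal sketch, using the common approximate correction factor:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges's g for two independent groups (means, SDs, sample sizes)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                     # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)        # small-sample correction (approx.)
    return d * j
```

With group sizes in the hundreds, as in this meta-analysis, the correction factor is close to 1, so g is nearly identical to d; it matters mainly for small primary studies.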
Glottal Area Waveform Measurements for Healthy Female and Male Speakers in Typical, High-Frequency, and Soft Phonation.
Rita R Patel, Zhaoyan Zhang, Michael Döllinger, Andrew Adeola, Stefan Kniesburges. Journal of Speech, Language, and Hearing Research (JSLHR), pp. 2067-2082. Published 2026-05-07. doi: 10.1044/2026_JSLHR-25-00611

Purpose: This study aimed to examine vocal fold kinematic characteristics associated with typical-frequency and vocal-intensity, high-frequency, and soft-intensity phonation in vocally healthy adults.
Method: The glottal area waveform (GAW) was measured from high-speed videoendoscopy in 66 adults (41 women and 25 men) during sustained /i:/ production across the three tasks, yielding a total of 594 phonations. Statistical analysis of glottal cycle quotients (open quotient [OQ], speed quotient [SQ], rate quotient [RQ], glottal gap index [GGI]), glottal cycle periodicity (amplitude periodicity, time periodicity [TP]), glottal cycle symmetry (phase asymmetry index, spatial symmetry index, amplitude symmetry index), normalized maximum area declination rate (MADRn), and amplitude-to-length ratio (ALR) was conducted. Principal component analysis was used to identify laryngeal strategies underlying the three tasks.
Results: High frequency and soft intensity resulted in changes in SQ, RQ, MADRn, and ALR in female participants, whereas in male participants, they impacted OQ, RQ, GGI, TP, MADRn, and ALR. High-frequency phonation is primarily achieved through increased cricothyroid muscle activity, while soft intensity is primarily achieved by reduced vocal fold adduction and subglottal pressure with compensatory cricothyroid activation.
Conclusion: High-frequency and soft-intensity phonations involve distinct laryngeal adjustments and clinically measurable, sex-dependent changes in the GAW, highlighting the need to tailor voice therapy to physiological strategies and sex-based differences.

The Contribution of Parent Lexical Diversity and Grammatical Complexity to Later Language Outcomes in Mandarin-Speaking Children With Cochlear Implants.
Jianfen Luo, Lei Xu, Min Wang, Jinming Li, Linda Spencer, Huei-Mei Liu, Ling-Yu Guo. Journal of Speech, Language, and Hearing Research (JSLHR), pp. 2365-2378. Published 2026-05-07. doi: 10.1044/2026_JSLHR-25-00709

Purpose: The study examined the relative contribution of parent lexical diversity and grammatical complexity at different time points to subsequent language outcomes in Mandarin-speaking children with cochlear implants (CIs).
Method: Participants were 25 Mandarin-speaking children who received cochlear implantation before 30 months of age. At 1 and 2 years after CI activation, we collected language samples in which the child played with the parent for 30 min. The parent's number of different words (NDW) and mean length of utterance (MLU) were computed from the free play at each time point. At 3 years after CI activation, we evaluated children's receptive and expressive language outcomes using a standardized language test.
Results: Parent NDW and MLU at 1 and 2 years post-CI activation were significantly correlated with children's language scores at 3 years post-CI activation, except that parent MLU at 1 year was not significantly correlated with children's expressive language scores. Regression analyses further revealed that parent NDW at 1 year post-CI activation was a stronger predictor than parent MLU of both children's receptive and expressive language scores at 3 years post-CI activation. At 2 years post-CI activation, parent NDW was relatively more important for children's subsequent receptive language scores, whereas parent MLU was relatively more important for children's subsequent expressive language scores.
Conclusions: The relative contributions of parent lexical diversity (NDW) and grammatical complexity (MLU) to later language outcomes in Mandarin-speaking children with CIs evolved over time, particularly for expressive language. The emphasis on lexical diversity and grammatical complexity in parental input should change with the child's developmental level.
Supplemental material: https://doi.org/10.23641/asha.32014944

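NDW and MLU are standard language-sample measures; a minimal sketch computes both from a list of utterance transcriptions. MLU is counted in words here for simplicity (an assumption; clinical practice often counts morphemes, and Mandarin samples may be segmented differently):

```python
def ndw_mlu(utterances):
    """Number of different words (NDW) and mean length of utterance
    in words (MLU-w) from a list of utterance strings."""
    tokenized = [u.lower().split() for u in utterances]
    ndw = len({w for utt in tokenized for w in utt})     # unique word types
    mlu = sum(len(utt) for utt in tokenized) / len(tokenized)
    return ndw, mlu
```

For example, the two-utterance sample ["the dog runs", "the dog"] has three different words and an MLU-w of 2.5.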