Advances in the Identification and Agreement About Meta-Therapy for Voice Disorders: A Methodological Paper
Sarah Martineau, Pierre André Ménard, Jackie Gartner-Schmidt, Leah Bernadette Helou, Christine Murphy Estes, Brianna K Hammerle, Juliana K Litts, Marci Rosenberg, Erin Schmura, Sylvie Ratté
Journal of Speech, Language, and Hearing Research, 2026-05-08, pp. 1-17. DOI: 10.1044/2026_JSLHR-25-00697

Background: Meta-therapy (MT) is a powerful dialogue-based element of voice therapy that scaffolds patients' cognitive models of treatment. MT dialogues have not historically been taught in an explicit manner, and substantial variability in identifying MT exists within and across clinicians. This low reliability is problematic for empirical research and educational transfer. Harnessing contemporary natural language processing technologies to stress test the concept of MT and its clinical instances may enhance our theoretical and empirical grasp of the construct.

Method: To capture crucial therapeutic component parts and MT, 10,443 clinician utterances stemming from conversation training therapy sessions delivered by six expert voice-specialized speech-language pathologists were transcribed and analyzed with a refined annotation framework. Two independent raters annotated each session and reconciled disagreements through adjudication, and the resulting consensus was compared with an expert gold standard. Time distribution was analyzed. Linguistic profiles were derived from bigram frequencies in utterances reaching expert-annotator consensus. Reliability and ambiguity in identification were assessed with multilabel confusion matrices, percentage agreement, Cohen's kappa, and Gwet's agreement coefficient (AC1).

Results: Gwet's AC1 indicated substantial-to-almost-perfect agreement for MT in most sessions, outperforming κ and mitigating base-rate artifacts (AC1: up to .98). Mean stand-alone MT duration ranged from 0.32 to 3.88 min per session. Distinctive MT bigrams contrasted with motor practice or psychosocial collocations that typified direct and counseling content, although lexical overlap produced systematic confusions with MT blended with direct and education/indirect labels.

Conclusions: The revised annotation model markedly improved the reproducibility of MT identification and revealed its linguistic signature. The model confirmed MT's role as a cross-cutting discourse that integrates with other therapeutic modalities. These advances provide an empirical foundation for machine learning classifiers, formal curriculum content, and further investigation into MT's contribution to treatment efficacy.

Supplemental material: https://doi.org/10.23641/asha.32159277
A Modified and Validated Resilience Scale for Individuals With Aphasia
Rebecca Hunting Pompon, Helen Mach, Patrycja Puzio
Journal of Speech, Language, and Hearing Research, 2026-05-07, pp. 2160-2169. DOI: 10.1044/2026_JSLHR-25-00621

Purpose: Resilience is an underdefined, understudied, yet potentially critical contributor to poststroke aphasia rehabilitation. Resilience measurement is difficult for individuals with communication limitations; therefore, the purpose of this study was to modify and validate a psychometrically robust scale of resilience, the University of Washington Resilience Scale (UWRS), to be maximally accessible for individuals diagnosed with aphasia.

Method: The UWRS eight-item short-form modification (with permission) involved panel discussions and cognitive interviews with experts in aphasia, including clinicians and individuals with poststroke aphasia. The resulting verbally and visually simplified scale was then validated with 65 participants with aphasia using scales of similar and related constructs, such as depression, chronic stress, and anxiety. Test-retest reliability was also assessed.

Results: Statistically significant associations between the modified scale and scales of similar and related constructs indicated its construct and convergent validity. A test-retest reliability analysis indicated the reliability of the modified scale.

Conclusions: The modified UWRS (mUWRS) appears to be a reliable and valid measure of resilience for individuals with aphasia. The mUWRS may be a useful clinical tool and important when used to investigate resilience and its impact on rehabilitation.
Aerodynamic and Acoustic Characteristics of Nasal Airflow in Parkinson's Disease
Jenny Vojtech, Talia S Mittelman, Kara M Smith, Defne Abur, Cara E Stepp
Journal of Speech, Language, and Hearing Research, 2026-05-07, pp. 1993-2006. DOI: 10.1044/2026_JSLHR-25-00852

Purpose: Velopharyngeal incompetence may contribute to speech difficulties in Parkinson's disease (PD) but has been minimally studied. This study investigated the acoustic and aerodynamic characteristics of nasal airflow in people with and without PD.

Method: Twenty adults diagnosed with idiopathic PD and 20 age- and sex-matched controls produced consonant-vowel speech stimuli while wearing a nasal airflow mask and oral microphone. Mean nasal airflow was measured during the 25-ms period immediately preceding consonant release ("burst airflow") and over the central 100 ms of each vowel ("vowel airflow"). Vocal intensity (dB SPL) was also measured over the center of each vowel.

Results: The PD group exhibited significantly higher burst airflow than the control group (7.7 vs. 1.9 cc/s), though vowel airflow did not differ significantly between groups. Vocal intensity was positively associated with burst and vowel nasal airflow only in the PD group, despite comparable mean intensity levels between groups. Within the PD group, disease duration and speech-specific motor scores were significantly correlated with burst airflow, and voice-related quality of life was correlated with vowel airflow.

Conclusions: Velopharyngeal dysfunction in PD was more pronounced during rapid motor sequences (stop consonant bursts) than vowel production and showed dynamic motor deterioration under increasing vocal intensities. The intensity-airflow relationship observed in PD suggests compromised velopharyngeal closure during higher vocal demands. Measures of velopharyngeal dysfunction may be useful markers of axial motor symptom severity, which has a large impact on quality of life and prognosis in people with PD.
{"title":"Measurement Properties of the Swedish Empowerment Audiology Questionnaire: A Rasch Analysis.","authors":"Moa Yngve, Josefina Larsson, Elin Karlsson","doi":"10.1044/2026_JSLHR-25-00821","DOIUrl":"10.1044/2026_JSLHR-25-00821","url":null,"abstract":"<p><strong>Objectives: </strong>Hearing loss is among the most common chronic conditions worldwide, substantially affecting daily life. Its management typically involves person-centered rehabilitation focusing on hearing aid fitting and active client participation. The Empowerment Audiology Questionnaire (EmpAQ) is a self-report instrument assessing empowerment, available in 15- and five-item versions. The EmpAQ was recently translated into Swedish (EmpAQ SWE), with satisfactory content validity, convergent validity, and reliability. This study aimed to further evaluate the EmpAQ SWE by examining its construct validity.</p><p><strong>Design: </strong>Adults with hearing loss (pure-tone average > 20 dB HL in the better ear) were invited to complete a digital survey (<i>n</i> = 1,176); 152 participants responded. The survey included demographic questions and both EmpAQ-15 and EmpAQ-5. Rasch analysis was applied to assess item targeting, threshold ordering, item fit, differential item functioning (DIF), local dependency (LD), unidimensionality, and reliability.</p><p><strong>Results: </strong>The EmpAQ SWE demonstrated acceptable measurement properties with adequate item targeting, covering most individuals in the sample and distinguishing approximately three clinically meaningful empowerment levels. Unidimensionality was supported, confirming that the instrument measures a single underlying construct. DIF was identified for Item 1 of the EmpAQ-5 between working and nonworking participants, while Item 6 of the EmpAQ-15 showed misfit and weak discrimination. LD was observed within empowerment dimensions but without broader residual correlations. The person separation index (PSI) was 0.77 for the EmpAQ-15 and 0.55 for the EmpAQ-5. Mean person locations were 1.6 (<i>SD</i> = 1.1) and 1.1 (<i>SD</i> = 1.3), respectively.</p><p><strong>Conclusions: </strong>The EmpAQ SWE demonstrated acceptable psychometric properties, supporting its validity for assessing self-reported empowerment among individuals with hearing loss in Sweden. Findings align with results from the original English version, reinforcing its clinical relevance. Future research should address item-level refinements to further enhance the instrument.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"2355-2364"},"PeriodicalIF":2.2,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147694760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Revisiting Consonant Acquisition in Typically Developing Chinese-Speaking Children With Insights Into a Multiword Data Set of Hearing and Deaf/Hard of Hearing Children.","authors":"Shu-Chuan Tseng","doi":"10.1044/2026_JSLHR-25-00124","DOIUrl":"10.1044/2026_JSLHR-25-00124","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to advance the understanding of consonant acquisition with quantitative and qualitative evidence from various groups of Chinese-speaking children. Normative patterns of phonological development of consonants were affirmed by utilizing phoneme transcription and perceptual judgment of a single-word normative data set, followed by analyses of comparable characteristics of a multiword data set of hearing and deaf/hard of hearing children.</p><p><strong>Method: </strong>The single-word normative data set comprised 798 typically developing Chinese-speaking children, whereas the multiword data set consisted of 79 normal hearing and 45 deaf/hard of hearing children. The percentage of consonants correct (PCC) was derived from phonemes transcribed by automatic alignment and human verification. Perceptual acceptability/intelligibility ratings include the percentage of correctly produced words (AccWord) in the normative data set and the intelligibility scores (IntScore) in the multiword data set. Distribution and correlation of PCC and AccWord/IntScore, as well as consonant error patterns, were examined and compared.</p><p><strong>Results: </strong>Developmental patterns and phonological aspects of consonant acquisition in Chinese-speaking children were thoroughly reported. PCC was significantly correlated with AccWord/IntScore across all subject groups in both single-word and multiword data sets. This finding suggested that PCC can indicate speech performance above the phoneme level. In all subject groups, stopping errors occurred more frequently than frication, the accuracy rates of retroflex sounds were low, and there was a mixed use of /n, l, ʐ/.</p><p><strong>Conclusions: </strong>The current study featured developmental growth curves, error analysis, and possible clinical applications of a wordlist-based normative data set as reference standards. The fact that PCC is correlated with acceptability/intelligibility ratings across data sets and subject groups supports its efficacy as a quantitative indicator of child speech assessment.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1920-1943"},"PeriodicalIF":2.2,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147694822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Articulation Rate Between 2 and 4 Years of Age in French Children's Spontaneous Speech
Mélanie Canault, Jennifer Krzonowski, Rémi Anselme, Sophie Kern
Journal of Speech, Language, and Hearing Research, 2026-05-07, pp. 1-20. DOI: 10.1044/2026_JSLHR-25-00507

Aim: This study reports age-related changes in articulation rate, in syllables per second (SPS) and phones per second (PPS), during spontaneous speech in French children aged 2-4 years and provides baseline values for this age range.

Method: In this cross-sectional study, spontaneous speech (4,361 utterances) from 91 French children was collected. Articulation rate was calculated in SPS and PPS and was examined as a function of seven age groups. The distribution of utterance-level articulation rate values in SPS and PPS by percentiles is provided.

Results: In line with previous studies, the results confirm that articulation rate increased with age. It increased from 3.14 to 3.88 SPS between 22 and 50 months, and from 6.06 to 8.31 PPS, with significant changes emerging at 38-41 months. The results also indicate that the number of sounds per syllable increased significantly with age and that the growth in syllable structure complexity preceded that of articulation rate in SPS.

Conclusions: This study adds to the already available benchmarks for articulation rate by providing new data for French preschoolers. Further studies are still needed to understand what other factors (e.g., cognitive, linguistic, and spontaneous speech styles) may be involved in the growth of articulation rate during development.
{"title":"Discovering Emotion in a Cocktail Party: How Emotional Learning Shapes Neural Dynamics in Speech-on-Speech Masking.","authors":"Lingxi Lu, Xiaohan Bao, Li Zheng, Lu Luo","doi":"10.1044/2026_JSLHR-25-00844","DOIUrl":"10.1044/2026_JSLHR-25-00844","url":null,"abstract":"<p><strong>Purpose: </strong>Under a noisy environment such as a cocktail party, emotional signals play a crucial role in helping listeners unmask target speech. However, it remains unclear how emotional features carried in a speaker's vocal timbre shape neural processing over time. This study aimed to characterize the temporal neural dynamics of learned emotion with a speaker's voice in complex listening conditions.</p><p><strong>Method: </strong>We employed an emotional learning paradigm in a speech-on-speech context, pairing two different target speakers with either angry or neutral facial expressions. Electroencephalogram data were recorded from healthy participants, and multivariate pattern analysis combined with representational similarity analysis was used to track the temporal unfolding of learned emotion linked to the target speaker's voice.</p><p><strong>Results: </strong>We observed early neural signatures of emotional processing between 150 and 180 ms after stimulus onset, occurring nearly simultaneously with the decoding of speaker identity. Importantly, brain-behavior analysis revealed that subjective emotional valence ratings could be decoded from neural signals as early as 94 ms. These findings suggest that vocal emotion can be processed rapidly and in a way relatively independent to the process of low-level acoustic cues.</p><p><strong>Conclusion: </strong>Our study provides evidence that acquired emotional associations with a speaker's voice can shape early-stage neural dynamics during speech processing under challenging listening conditions.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.31842814.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1944-1954"},"PeriodicalIF":2.2,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147597596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Longitudinal Acoustic Analysis of /s/ in Head and Neck Cancer Patients Treated With Surgery.","authors":"Gillian de Boer, Daniel Aalto","doi":"10.1044/2026_JSLHR-25-00525","DOIUrl":"10.1044/2026_JSLHR-25-00525","url":null,"abstract":"<p><strong>Purpose: </strong>Oral and oropharyngeal cancer and its treatment can have a devastating impact on speech. The goal of this study is to characterize the changes in English sibilant /s/ production associated with resection site and the sex and age of the patients following surgical removal of oral and oropharyngeal tumors.</p><p><strong>Method: </strong>The acoustics of 4,371 productions of /s/ from read continuous speech of 89 patients (66 men, 23 women) with an mean age of 58.2 years (range: 22-82) were analyzed before and after surgery for oral and/or oropharyngeal cancer. The center of gravity (COG) of the fricative power spectrum was analyzed with a linear mixed-effects model with assessment time (pre-operative and 1, 6, and 12 months postoperative), age, sex, and proportion of resections (%) within oral and pharyngeal structures as fixed effects and random intercepts for speaker and phonetic context.</p><p><strong>Results: </strong>Before surgery, male sex and older age were associated with lower COG. After surgery, COG was reduced with partial recovery at 1 year and dropped more for females than males. Overall, recovery was better among those who did not have radiation. At 1 year, the COG of /s/ was most impacted by resections to the tongue (without radiation), followed by resections to the velopharyngeal mechanism (with radiation). The additional effect of radiation treatment was modulated by age.</p><p><strong>Conclusions: </strong>The results suggest partial recovery of speech function at 1 year. The recovery was gendered with females remaining further away from the pretreatment values after surgery compared to the males.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.31953024.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"2007-2019"},"PeriodicalIF":2.2,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147701936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond Accuracy: Cognitive Speed in Speech-in-Noise Processing and Its Implications for the Ease of Language Understanding Model
Jerker Rönnberg, Erik Marsja, Henrik Danielsson, Emil Holmer, Örjan Dahlström
Journal of Speech, Language, and Hearing Research, 2026-05-07, pp. 2303-2322. DOI: 10.1044/2026_JSLHR-25-00773

Purpose: The ease of language understanding (ELU) model predicts two processing streams (fast, implicit prediction and slower, explicit postdiction), challenging single-factor "general speed" accounts. The present study examined whether cognitive speed is unitary or fractionated, whether both speed and accuracy predict speech in noise (SPIN), and how SPIN relates to rapid automatized naming (RAN).

Method: Adults with normal hearing or hearing loss (n = 303) from the N200 study completed latency-based tasks indexing long-term memory access, executive control, working memory tests, SPIN, and RAN. Exploratory and confirmatory factor analyses were used.

Results: Two speed factors emerged (long-term memory access speed and executive speed), contradicting a general speed model. Speed-SPIN associations were small and clearest in easier listening conditions. Executive speed predicted RAN. SPIN correlated with RAN, especially among hard of hearing participants.

Conclusions: Cognitive speed fractionates in line with the ELU model. Working memory speed, unlike accuracy, does not drive SPIN. Executive speed selectively predicts RAN, and SPIN relates to RAN, supporting SPIN's potential as an early proxy for cognitive decline.
Responses and Nonresponses in a Bound Morpheme Elicitation Task by Deaf and Hard of Hearing Children
Erin M Ingvalson, Tina M Grieco-Calub, Mark VanDam, Lynn K Perry
Journal of Speech, Language, and Hearing Research, 2026-05-07, pp. 2209-2218. DOI: 10.1044/2026_JSLHR-25-00588

Purpose: We aimed to explore the rates of bound morpheme production at two time points (T1 and T2) by deaf and hard of hearing (DHH) preschoolers and their typically hearing (TH) peers. We further sought to describe the rates and types of unscorable responses children produced.

Method: Sixty-four DHH preschoolers and 66 TH preschoolers participated as part of a larger, ongoing longitudinal study. Children were given the Test of Early Grammatical Impairment (TEGI) screener, which elicits productions of the third-person singular present and past tense. TEGI screeners were given twice, spaced 6 months apart.

Results: TH children produced significantly more singular present-tense and regular past-tense morphemes than cochlear implant (CI)-using children at both time points; hearing aid-using children were not significantly different from TH or CI users. All children were more accurate with the regular past tense at T2 than at T1. No interactions were significant. Examining the types of unscorable responses indicated that the DHH children were more likely to echo the prompt than TH children, particularly at T1.

Conclusions: Assessments that elicit bound morpheme productions may not best capture DHH children's morphological sensitivity. When language samples are not feasible, receptive tasks may be a good alternative to probe children's knowledge.