{"title":"Reliability and Diagnostic Accuracy of Semi-Automated and Automated Acoustic Quantification of Vocal Tremor Characteristics.","authors":"Youri Maryn, Kaitlyn Dwenger, Sidney Kaufmann, Julie Barkmeier-Kraemer","doi":"10.1044/2025_JSLHR-24-00467","DOIUrl":"10.1044/2025_JSLHR-24-00467","url":null,"abstract":"<p><strong>Purpose: </strong>This study compared three methods of acoustic algorithm-supported extraction and analysis of vocal tremor properties (i.e., rate, extent, and regularity of intensity level and fundamental frequency modulation): (a) visual perception and manual data extraction, (b) semi-automated data extraction, and (c) fully automated data extraction.</p><p><strong>Method: </strong>Forty-five midvowel sustained [a:] and [i:] audio recordings were collected as part of a scientific project to learn about the physiologic substrates of vocal tremor. This convenience data set contained vowels with a representative variety in vocal tremor severity. First, the vocal tremor properties in intensity level and fundamental frequency tracks were visually inspected and manually measured using Praat software. Second, the vocal tremor properties were determined using two Praat scripts: automated with the script of Maryn et al. (2019) and semi-automated with an adjusted version of this script to enable the user to intervene with the signal processing. The reliability of manual vocal tremor property measurement was assessed using the intraclass correlation coefficient. The properties as measured with the two scripts (automated vs. semi-automated) were compared with the manually determined properties using correlation and diagnostic accuracy statistical methods.</p><p><strong>Results: </strong>With intraclass correlation coefficients between .770 and .914, the reliability of the manual method was acceptable. The semi-automated method correlated with manual property measures better and was more accurate in diagnosing vocal tremor than the automated method.</p><p><strong>Discussion: </strong>Manual acoustic measurement of vocal tremor properties can be laborious and time-consuming. Automated or semi-automated acoustic methods may improve efficiency in vocal tremor property measurement in clinical as well as research settings. Although both Praat script-supported methods in this study yielded acceptable validity with the manual data measurements as a referent, the semi-automated method showed the best outcomes.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28873088.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2721-2740"},"PeriodicalIF":2.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12173159/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144042893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of Urdu Speech Audiometry Material for the Deaf and Hard of Hearing Community.","authors":"Sahar Rauf, Sarmad Hussain, Anam Amin, Shumaila Tanveer, Asma Jabeen","doi":"10.1044/2025_JSLHR-24-00118","DOIUrl":"10.1044/2025_JSLHR-24-00118","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to develop standardized Urdu speech materials for assessing speech recognition threshold (SRT) and word recognition score (WRS) for clinical use in Pakistan.</p><p><strong>Method: </strong>The development of Urdu speech materials followed four key parameters: phonemic coverage, phonetic dissimilarity, familiarity with the participants, and homogeneity in terms of audibility. Bisyllabic words for SRT measurement and monosyllabic words for WRS measurement were selected. The most familiar 50 spondee words and 50 monosyllabic words were selected for the evaluation of SRT and WRS, respectively, in children with normal hearing. Thirty spondee words and 34 monosyllabic words with relatively steep and homogeneous psychometric function slopes were included in the final lists.</p><p><strong>Results: </strong>The mean psychometric function slope at the 50% threshold for the 30 selected spondee words was found to be 9.1%/dB, and for 34 monosyllabic words, it was found to be 6%/dB.</p><p><strong>Conclusions: </strong>Bisyllabic words for SRT measurement and monosyllabic words for WRS measurement were successfully developed and evaluated in Lahore, Pakistan. There is a need for the development of speech audiometry materials in other Pakistani languages.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2900-2914"},"PeriodicalIF":2.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144040327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contributions of Behavioral and Electrophysiological Spectrotemporal Processing to the Perception of Degraded Speech in Younger and Older Adults.","authors":"Bruna S Mussoi, A'Diva Warren, Jordin Benedict, Serena Sereki, Julia Jones Huyck","doi":"10.1044/2025_JSLHR-24-00667","DOIUrl":"10.1044/2025_JSLHR-24-00667","url":null,"abstract":"<p><strong>Purpose: </strong>The aim of this study was to evaluate (a) the effect of aging on spectral and temporal resolution, as measured both behaviorally and electrophysiologically, and (b) the contributions of spectral and temporal resolution and cognition to speech perception in younger and older adults.</p><p><strong>Method: </strong>Eighteen younger and 18 older listeners with normal hearing or no more than mild-moderate hearing loss participated in this cross-sectional study. Speech recognition was assessed with the QuickSIN test and six-band noise-vocoded sentences. Frequency discrimination, temporal interval discrimination, and gap detection thresholds were obtained using a three-alternative forced-choice task. Cortical auditory evoked potentials were recorded in response to tonal frequency changes and to gaps in noise. Cognitive testing included nonverbal reasoning, vocabulary, working memory, and processing speed.</p><p><strong>Results: </strong>There were age-related declines on many outcome measures, including speech perception in noise, cognition (nonverbal reasoning, processing speed), behavioral gap detection thresholds, and neural correlates of spectral and temporal processing (smaller P1 amplitudes and prolonged P2 latencies in response to frequency change; smaller N1-P2 amplitudes and longer P1, N1, P2 latencies to temporal gaps). Hearing thresholds and neural processing of spectral and temporal information were the main predictors of degraded speech recognition performance, in addition to cognition and perceptual learning. These factors accounted for 58% of the variability on the QuickSIN test and 41% of variability on the noise-vocoded speech.</p><p><strong>Conclusions: </strong>The results confirm and extend previous work demonstrating age-related declines in gap detection, cognition, and neural processing of spectral and temporal features of sounds. Neural measures of spectral and temporal processing were better predictors of speech perception than behavioral ones.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28883711.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2992-3010"},"PeriodicalIF":2.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Right-Hemispheric White Matter Organization Is Associated With Speech Timing in Autistic Children.","authors":"Kelsey E Davison, Talia Liu, Rebecca M Belisle, Tyler K Perrachione, Zhenghan Qi, John D E Gabrieli, Helen Tager-Flusberg, Jennifer Zuk","doi":"10.1044/2025_JSLHR-24-00548","DOIUrl":"10.1044/2025_JSLHR-24-00548","url":null,"abstract":"<p><strong>Purpose: </strong>Converging research suggests that speech timing, including altered rate and pausing when speaking, can distinguish autistic individuals from nonautistic peers. Although speech timing can impact effective social communication, it remains unclear what mechanisms underlie individual differences in speech timing in autism.</p><p><strong>Method: </strong>The present study examined the organization of speech- and language-related neural pathways in relation to speech timing in autistic and nonautistic children (24 autistic children, 24 nonautistic children [ages: 5-17 years]). Audio recordings from a naturalistic language sampling task (via narrative generation) were transcribed to extract speech timing features (speech rate, pause duration). White matter organization (as indicated by fractional anisotropy [FA]) was estimated for key tracts bilaterally (arcuate fasciculus, superior longitudinal fasciculus [SLF], inferior longitudinal fasciculus [ILF], frontal aslant tract [FAT]).</p><p><strong>Results: </strong>Results indicate associations between speech timing and right-hemispheric white matter organization (FA in the right ILF and FAT) were specific to autistic children and not observed among nonautistic controls. Among nonautistic children, associations with speech timing were specific to the left hemisphere (FA in the left SLF).</p><p><strong>Conclusion: </strong>Overall, these findings enhance understanding of the neural architecture influencing speech timing in autistic children and, thus, carry implications for understanding potential neural mechanisms underlying speech timing differences in autism.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28934432.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2685-2699"},"PeriodicalIF":2.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12173158/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144103018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What Influences Parenting Stress? Examining Parenting Stress and Self-Efficacy Across Groups of Children With Autism Spectrum Disorder, at Risk of Developmental Language Disorder, and With Typically Developing Language.","authors":"Merve Dilbaz-Gürsoy, Ayşın Noyan-Erbaş, Halime Tuna Çak Esen, Ayşen Köse, Esra Özcebe","doi":"10.1044/2025_JSLHR-24-00672","DOIUrl":"10.1044/2025_JSLHR-24-00672","url":null,"abstract":"<p><strong>Purpose: </strong>The purpose of this study was to examine whether there are differences in parenting stress levels and self-efficacy among children with autism spectrum disorder (ASD), at risk of developmental language disorder (rDLD), and with typically developing language (TDL). The study also investigated the children's language abilities and/or behavioral problems as potential predictors of parents' levels of stress and self-efficacy.</p><p><strong>Method: </strong>The study assessed children's language skills and behavioral problems as well as parental stress and self-efficacy in a sample of 2- to 4-year-old children with ASD (<i>n</i> = 35), rDLD (<i>n</i> = 35), and with TDL (<i>n</i> = 25).</p><p><strong>Results: </strong>The findings of the study revealed that parents of children with ASD experienced the highest level of parenting stress related to child characteristics and the lowest level of self-efficacy, whereas parents of children rDLD had higher parenting stress compared to parents of children with TDL. Furthermore, although behavioral problems were shown to be a predictor that explains parenting stress in all groups, expressive language was identified as a predictor only in the rDLD group. While parental self-efficacy was also found to be predicted by expressive language in the TDL group, it was discovered that self-efficacy affected parenting stress in parents of children with ASD and rDLD.</p><p><strong>Conclusions: </strong>These findings demonstrated that parental stress was a complex phenomenon impacted by several factors. This study may suggest the importance of interventions that aim to decrease parental stress and enhance self-efficacy, going beyond the children's language skills and behavioral problems.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2837-2850"},"PeriodicalIF":2.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144016327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temporal Sensitivity in Patients With Type 1 Diabetes Mellitus and Insights Into Their Everyday Auditory Performance.","authors":"Ozlem Topcu, Süleyman Nahit Sendur, Hilal Dincer D'Alessandro, Merve Ozbal Batuk, Gonca Sennaroglu","doi":"10.1044/2025_JSLHR-24-00554","DOIUrl":"10.1044/2025_JSLHR-24-00554","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to investigate the effects of Type 1 diabetes mellitus (T1DM) on low-frequency (LF) pitch and speech-in-noise perception linked to temporal sensitivity and everyday auditory performance. The relationships between these outcomes and potential confounders, such as diabetes duration, glycemic control, and neuropathy, were also examined.</p><p><strong>Method: </strong>The participants consisted of 18 young patients with T1DM. They were matched with 18 healthy controls based on age, gender, and audiometric thresholds (up to 20 kHz). Measurements included behavioral measures of temporal sensitivity using the low-pass-filtered Word Stress Pattern (WSP-LPF) test and the Hearing in Noise Test (HINT), as well as self-reported measure using the Speech, Spatial and Qualities of Hearing Scale.</p><p><strong>Results: </strong>Patients with T1DM showed significantly poorer performance on both the WSP-LPF (<i>p</i> < .001), and HINT (<i>p</i> = .004) tests compared to healthy controls. Specifically, patients with T1DM showed impaired perception of lexical stress cued by LF pitch and required higher signal-to-noise ratios to effectively perceive speech in complex listening situations. Self-report measures indicated reduced hearing satisfaction in patients with T1DM (<i>p</i> = .001). Statistically significant correlations were found between WSP-LPF and diabetes duration (<i>p</i> = .021).</p><p><strong>Conclusions: </strong>The present findings reveal that T1DM negatively affects the perception of lexical stress and speech-in-noise performance, reflecting disruptions in temporal sensitivity. These impairments are present even in patients with normal audiometric thresholds, and addressing these deficits may be crucial for improving auditory function and developing targeted interventions.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2915-2928"},"PeriodicalIF":2.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144027915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating the Effects of Speaking Rate on Spoken Language Processing in Children Who Are Deaf and Hard of Hearing.","authors":"Rosanne Abrahamse, Titia Benders, Katherine Demuth, Nan Xu Rattanasone","doi":"10.1044/2025_JSLHR-24-00108","DOIUrl":"10.1044/2025_JSLHR-24-00108","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to investigate how hearing loss affects (a) spoken language processing and (b) processing of faster speech in school-age children who are deaf and hard of hearing (DHH).</p><p><strong>Method: </strong>Spoken language processing was compared in thirty-six 7- to 12-year-olds who are DHH and 31 peers with normal hearing using a word detection task. Children listened for a target word in sentences presented at a normal (4.5 syllables per second [syll./s]) versus fast (6.1 syll./s) speaking rate and pressed a key when they heard the word in the sentence. Response time was taken as an outcome measure. Relationships between working memory capacity, vocabulary size, and processing speed were also assessed.</p><p><strong>Results: </strong>Children who are DHH were slower than their peers with normal hearing to detect words in sentences, but no evidence for a negative effect of speaking rate was observed. Furthermore, contrary to expectation, a larger working memory capacity was associated with slower spoken language processing, with effects stronger for younger children with smaller vocabulary sizes.</p><p><strong>Conclusions: </strong>Regardless of speaking rate, children who are DHH may be at risk for delays in spoken language processing relative to peers with normal hearing. These delays may have consequences for their access to learning and communication in spoken forms in everyday environments, which contain additional challenges such as background noise, competing talkers, and speaker variability.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28842611.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2959-2977"},"PeriodicalIF":2.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144057004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing Perceptual Difficulty Across Speech Sound Categories and Contrasts to Optimize Minimal Pair Training.","authors":"Kristi Hendrickson, Nadine Lee, Elizabeth A Walker, Meaghan Foody, Philip Combiths","doi":"10.1044/2025_JSLHR-24-00254","DOIUrl":"10.1044/2025_JSLHR-24-00254","url":null,"abstract":"<p><strong>Purpose: </strong>Utilizing psycholinguistic methods, this article aims to ascertain the perceptual difficulty associated with distinguishing between different speech sound categories and individual contrasts within those categories, with the ultimate goal of informing the use of minimal pair contrasts in perceptual training.</p><p><strong>Design: </strong>Using eye-tracking in the Visual World Paradigm, adults with normal hearing (<i>N</i> = 30) were presented with an auditory word and were required to identify the matching image from a selection of four options: the target word, two unrelated words, and a minimal pair competitor contrasting with the target word in word-final position in one of four categories (manner, place, voicing, nasality).</p><p><strong>Results: </strong>We measured fixations to minimal pair competitors over time and found that manner and place competitors exhibited greater competition compared to voicing and nasality competitors. Notably, within manner competitors, substantial differences in discrimination difficulty were observed among individual contrasts.</p><p><strong>Conclusions: </strong>Conventional views of speech sound perception have often grouped sounds into broad categories (manner, place, voicing, nasality), potentially overlooking the nuanced differences within these groupings, which significantly affect perception. This work is vital for advancing our understanding of speech perception and its mechanisms. Furthermore, this work will help to refine minimal pair treatment strategies in clinical contexts.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28848446.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2945-2958"},"PeriodicalIF":2.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143994067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Fundamental Frequency and Vocal Tract Resonance on Sentence Recognition in Noise.","authors":"Jing Yang, Xianhui Wang, Victoria Costa, Li Xu","doi":"10.1044/2025_JSLHR-24-00758","DOIUrl":"10.1044/2025_JSLHR-24-00758","url":null,"abstract":"<p><strong>Purpose: </strong>This study examined the effects of change in a talker's sex-related acoustic properties (fundamental frequency [<i>F</i>0] and vocal tract resonance [VTR]) on speech recognition in noise.</p><p><strong>Method: </strong>The stimuli were Hearing in Noise Test sentences, with the <i>F</i>0 and VTR of the original male talker manipulated into four conditions: low <i>F</i>0 and low VTR (L<sub><i>F</i>0</sub>L<sub>VTR</sub>; i.e., the original recordings), low <i>F</i>0 and high VTR (L<sub><i>F</i>0</sub>H<sub>VTR</sub>), high <i>F</i>0 and high VTR (H<sub><i>F</i>0</sub>H<sub>VTR</sub>), and high <i>F</i>0 and low VTR (H<sub><i>F</i>0</sub>L<sub>VTR</sub>). The listeners were 42 English-speaking, normal-hearing adults (21-31 years old). The sentences mixed with speech spectrum-shaped noise at various signal-to-noise ratios (i.e., -10, -5, 0, and +5 dB) were presented to the listeners for recognition.</p><p><strong>Results: </strong>The results revealed no significant differences between the H<sub><i>F</i>0</sub>H<sub>VTR</sub> and L<sub><i>F</i>0</sub>L<sub>VTR</sub> conditions in sentence recognition performance and the estimated speech reception thresholds (SRTs). However, in the H<sub><i>F</i>0</sub>L<sub>VTR</sub> and L<sub><i>F</i>0</sub>H<sub>VTR</sub> conditions, the recognition performance was reduced, and the listeners showed significantly higher SRTs relative to those in the H<sub><i>F</i>0</sub>H<sub>VTR</sub> and L<sub><i>F</i>0</sub>L<sub>VTR</sub> conditions.</p><p><strong>Conclusion: </strong>These findings indicate that male and female voices with matched <i>F</i>0 and VTR (e.g., L<sub><i>F</i>0</sub>L<sub>VTR</sub> and H<sub><i>F</i>0</sub>H<sub>VTR</sub>) yield equivalent speech recognition in noise, whereas voices with mismatched <i>F</i>0 and VTR may reduce intelligibility in noisy environments.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.29052305.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"3011-3022"},"PeriodicalIF":2.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144103013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Significance of a Higher Prevalence of ADHD and ADHD Symptoms in Children Who Stutter.","authors":"Bridget Walsh, Seth E Tichenor, Katelyn L Gerwin","doi":"10.1044/2025_JSLHR-24-00668","DOIUrl":"10.1044/2025_JSLHR-24-00668","url":null,"abstract":"<p><strong>Purpose: </strong>Research suggests that attention-deficit/hyperactivity disorder (ADHD) and its symptoms occur more frequently in individuals who stutter. The purpose of this study was to document the prevalence of ADHD diagnoses and ADHD symptoms in children who stutter and examine potential relationships between ADHD and stuttering characteristics.</p><p><strong>Method: </strong>A total of 204 children between the ages of 5 and 18 years (<i>M</i> = 9.9 years; <i>SD</i> = 3.5 years) and their parents participated in the study. Parents completed the ADHD Rating Scale (ADHD-RS) indexing Inattention and Hyperactivity-Impulsivity symptoms, and children completed the age-appropriate version of the Overall Assessment of the Speaker's Experience of Stuttering assessing the adverse impact of stuttering. Chi-square proportions and Mann-Whitney <i>U</i> tests were used to assess differences in demographic and other variables of interest between children with and without an ADHD diagnosis. Multiple linear regression was used to assess relationships between ADHD symptoms and stuttering characteristics.</p><p><strong>Results: </strong>Parents reported that 17.2% of children who stutter in our sample had been diagnosed with ADHD. Over 40% of children without an ADHD diagnosis had ADHD-RS scores that met the criteria for further evaluation. No significant relationship between ADHD symptoms and stuttering severity was found, but child age and inattention scores significantly, albeit modestly, predicted the adverse impact of stuttering.</p><p><strong>Conclusions: </strong>Researchers and clinicians might be privy to a child's ADHD diagnosis, but they should recognize that many children who stutter without an ADHD diagnosis may exhibit elevated symptoms of inattention and hyperactivity-impulsivity. These symptoms can complicate both research outcomes and the treatment of stuttering.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28899620.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2741-2758"},"PeriodicalIF":2.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12173216/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}