{"title":"Exploring Methodological Decisions for Calculating the Minimally Detectable Change in Dysarthria: Reliability, Statistics, and Standard Error of Measurement.","authors":"Kelly E Gates, Antje S Mefferd, Kaila L Stipancic","doi":"10.1044/2025_JSLHR-24-00899","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00899","url":null,"abstract":"<p><strong>Purpose: </strong>The minimally detectable change (MDC), widely used in rehabilitation sciences to interpret changes in outcome measures, is calculated using a reliability method, reliability statistic, and standard error of measurement (<i>SEM</i>). This study examined how different methodological choices affect MDC thresholds of speech intelligibility in speakers with dysarthria. The goals of this study were to compare MDCs calculated using (a) three different reliability methods, (b) two different reliability statistics, and (c) three different <i>SEM</i> calculations.</p><p><strong>Method: </strong>Recordings of the Speech Intelligibility Test from 200 speakers including speakers with amyotrophic lateral sclerosis (<i>n</i> = 16), Huntington's disease (<i>n</i> = 44), multiple sclerosis (<i>n</i> = 60), and Parkinson's disease (<i>n</i> = 40), along with healthy controls (<i>n</i> = 40), were drawn from two databases. Thirty inexperienced listeners completed two sessions, providing orthographic transcriptions of 20 speakers. MDCs of intelligibility were calculated using (a) three reliability methods (i.e., test-retest, split-half, and intrarater), (b) two reliability statistics (i.e., Pearson <i>r</i> and intraclass correlation coefficients [ICCs]), and (c) three different formulas for calculating the <i>SEM</i>. 
Kruskal-Wallis tests were used to assess the effects of reliability methods, statistics, and <i>SEM</i> calculations.</p><p><strong>Results: </strong>Significant differences were found between the MDCs when using split-half and test-retest reliability, when using Pearson <i>r</i> and ICC, and when using two of the three <i>SEM</i> calculations.</p><p><strong>Conclusions: </strong>Results demonstrate that methodological decisions can impact MDCs of speech intelligibility in speakers with dysarthria, highlighting the need for specific, detailed reporting of methodology used to calculate MDCs in future work. Findings can provide methodological guidance for future studies and contextualize existing research on intelligibility changes.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-18"},"PeriodicalIF":0.0,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144683960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
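The MDC comparison in the abstract above rests on a standard construction: an SEM is derived from score variability and a reliability coefficient, and the MDC scales the SEM by a confidence multiplier. A minimal sketch of the classical version of these formulas (SEM = SD * sqrt(1 - r), MDC95 = 1.96 * sqrt(2) * SEM); note that the study compares three SEM formulas and this sketch shows only this common one, and the SD and reliability values below are illustrative, not taken from the study:

```python
import math

def sem_from_reliability(sd: float, reliability: float) -> float:
    """Classical SEM: SEM = SD * sqrt(1 - r), where r is a reliability
    coefficient (e.g., Pearson r or an ICC)."""
    return sd * math.sqrt(1.0 - reliability)

def mdc95(sem: float) -> float:
    """MDC at the 95% confidence level: 1.96 * sqrt(2) * SEM.
    The sqrt(2) reflects measurement error in both of two observations."""
    return 1.96 * math.sqrt(2.0) * sem

# Illustrative values: intelligibility scores with SD = 10 percentage points.
sem_icc = sem_from_reliability(10.0, 0.90)  # ~3.16 points
threshold = mdc95(sem_icc)                  # ~8.77 points
```

Because the SEM shrinks as reliability rises, the same data can yield notably different MDC thresholds depending on which reliability statistic is plugged in, which is the methodological sensitivity the study quantifies.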
{"title":"Spoken Language Dual-Task Effects in Typical Aging: A Systematic Review.","authors":"Christos Salis, Laura L Murray, Rawand Jarrar","doi":"10.1044/2025_JSLHR-24-00826","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00826","url":null,"abstract":"<p><strong>Purpose: </strong>Many studies have shown that several spoken language production skills are negatively affected by the typical aging process. In contrast, how language is affected when older adults are asked to speak under conditions of distraction using dual- or multitask paradigms has received less empirical attention, even though such conditions align with the demands of everyday communication contexts. Accordingly, the objectives of this systematic review were to synthesize and appraise the literature on spoken language production in neurotypical older adults when they talk under conditions of distraction. To our knowledge, this is the first systematic review to focus on this topic.</p><p><strong>Method: </strong>Five databases (EMBASE, LLBA, Medline, PsycINFO, Web of Science Core Collection) were searched (from the databases' inception to January 2024) for eligible studies using comprehensive search terms. All steps, from screening of records and selection of studies to data extraction and critical appraisal, were carried out by two reviewers who worked independently.</p><p><strong>Results: </strong>Thirteen studies were included in the qualitative evidence synthesis. Critical appraisal showed that the current evidence base is weak overall.</p><p><strong>Conclusions: </strong>The findings were mixed as to whether dual-task costs (i.e., worse performance relative to the single-task, talking-only condition) are evident in aging. However, speech fluency in discourse appears to be more vulnerable under conditions of distraction in older than in younger adults. Across all included studies, significant methodological shortcomings were present. 
Whereas this literature points to some age-related changes when speaking in more challenging, dual-task contexts, further research is clearly needed on topics such as the types of dual-task contexts that reveal age-related language changes, the role of instructions on task prioritization, and the role of influential participant variables (e.g., cardiovascular risk factors) on dual-task language performance in older adults.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.29525795.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-16"},"PeriodicalIF":0.0,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144683962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Audiovisual Integration in Cued Speech Perception: Impact on Speech Recognition in Quiet and Noise Among Adults With Hearing Loss and Those With Typical Hearing.","authors":"Cora Jirschik Caron, Coriandre Vilain, Jean-Luc Schwartz, Jacqueline Leybaert, Cécile Colin","doi":"10.1044/2025_JSLHR-24-00334","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00334","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to investigate audiovisual (AV) integration of cued speech (CS) gestures with the auditory input presented in quiet and amidst noise while controlling for visual speech decoding. Additionally, the study considered participants' auditory status and auditory abilities as well as their abilities to produce and decode CS in speech perception.</p><p><strong>Method: </strong>Thirty-one adults with hearing loss (HL) and proficient in CS decoding participated, alongside 52 adults with typical hearing (TH), consisting of 14 CS interpreters and 38 individuals naive regarding the system. The study employed a speech recognition test that presented CS gestures, lipreading, and lipreading integrated with CS gestures, either without sound or combined with speech sounds in quiet or amidst noise.</p><p><strong>Results: </strong>Participants with HL and lower auditory abilities integrated the auditory input with CS gestures and increased their recognition scores by 44% in quiet conditions of speech recognition. For participants with HL and higher auditory abilities, integrating CS gestures with the auditory input mixed with noise increased recognition scores by 43.1% over the auditory-only condition. 
For all participants with HL, CS integrated with lipreading produced optimal recognition regardless of their auditory abilities, while for those with TH, adding CS gestures did not enhance lipreading, and AV benefits were observed only when lipreading was integrated with the auditory input presented amidst noise.</p><p><strong>Conclusions: </strong>Individuals with HL are able to integrate CS gestures with auditory input. Visually supporting auditory speech with CS gestures improves speech recognition in noise and also in quiet conditions of communication for participants with HL and low auditory abilities.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-19"},"PeriodicalIF":0.0,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144683959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Externalizing Behaviors in Preschool-Aged Children With Cochlear Implants.","authors":"William G Kronenberger, Irina Castellanos, Jessica Beer, David B Pisoni","doi":"10.1044/2025_JSLHR-25-00005","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-25-00005","url":null,"abstract":"<p><strong>Purpose: </strong>Deaf children with cochlear implants (CIs) experience challenges in early development of hearing and language skills that may place them at risk for externalizing behavior problems, such as aggression, hyperactivity-impulsivity, and oppositional behavior. This study is a longitudinal investigation of (a) between-groups differences in externalizing behavior problems between preschool-aged children with CIs and normal-hearing (NH) peers, and (b) within-group factors that may explain variability in externalizing problems within the sample of CI users.</p><p><strong>Method: </strong>Parents of 26 children with CIs and 30 NH peers completed externalizing behavior checklists at two visits separated by 1 year, starting at ages 3 or 4 years. Demographic/hearing history variables, language (vocabulary), nonverbal intelligence, and coping flexibility were assessed for concurrent and predictive associations with externalizing problems within the CI sample.</p><p><strong>Results: </strong>Results showed significantly greater externalizing behavior problems in CI users compared to NH peers at Time 1, although these differences were less pronounced 1 year later. Poorer residual hearing and better coping flexibility at Time 1 were associated with fewer externalizing behavior problems in CI users at Time 2. CI users who showed improvement in coping flexibility over the 1-year period also showed improvement in externalizing behaviors during that period. 
Nonverbal intelligence and language were not associated with externalizing behavior problems.</p><p><strong>Conclusions: </strong>Preschool-aged CI users may be at greater risk than NH peers for the early development of externalizing behavior problems. Improved coping flexibility may offer the potential for improvement in externalizing behavior problems for young CI users.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-16"},"PeriodicalIF":0.0,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144683961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Contralateral Noise on Cortical Auditory Evoked Potential Latencies and Amplitudes.","authors":"Donguk Lee, James D Lewis, Ashley Harkrider, Mark Hedrick","doi":"10.1044/2025_JSLHR-24-00698","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00698","url":null,"abstract":"<p><strong>Purpose: </strong>There is evidence from past animal work that the neural signal-to-noise ratio (SNR) is modulated through the action of the medial olivocochlear reflex (MOCR). This is commonly referred to as unmasking. However, evidence of unmasking in humans is limited, perhaps due to the traditional approach of measuring the MOCR using otoacoustic emissions-a preneural metric. The amplitudes and latencies of the late latency response (LLR) are sensitive to changes in SNR and may provide a means to noninvasively evaluate MOCR unmasking at the neural level. The purpose of this study was to investigate MOCR-mediated enhancement of ipsilateral noise in humans using the LLR.</p><p><strong>Method: </strong>Fifty normal-hearing adults were recruited. The LLR was measured for a 60 dB SPL, 1-kHz tone in both ipsilateral quiet and ipsilateral noise, with and without presentation of contralateral noise. For the ipsilateral noise conditions, the noise was presented at three different levels to achieve SNRs of +5 dB, +15 dB, and +25 dB. The contralateral noise was always 60 dB SPL white noise. LLR latencies (P1, N1, and P2) and interpeak amplitudes (P1-N1 and N1-P2) were measured for all conditions. In addition, otoacoustic emissions (OAEs) for a 1-kHz tone burst were measured in ipsilateral quiet both with and without contralateral noise. The same contralateral noise was used for both OAEs and LLRs.</p><p><strong>Results: </strong>For the ipsilateral noise conditions, SNR had a significant effect on LLR latencies and interpeak amplitudes: Latencies decreased, and amplitudes increased as SNR improved. 
The presentation of contralateral noise had a significant effect on P1 and N1 latencies, both of which decreased. LLR interpeak amplitudes significantly increased upon the presentation of contralateral noise. For the ipsilateral quiet condition, there were no significant effects of contralateral noise on LLR metrics. Though OAE magnitudes were significantly reduced upon presentation of contralateral noise, consistent significant relationships between OAE magnitude changes and changes in the LLR metrics were not found.</p><p><strong>Conclusion: </strong>Findings suggest that the presentation of contralateral noise enhances the neural response to an ipsilateral noise, potentially through MOC efferent feedback.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.29441903.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-16"},"PeriodicalIF":0.0,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Initial Evaluation of a New Auditory Attention Task for Assessing Alerting, Orienting, and Executive Control Attention.","authors":"Arianna N LaCroix, Emily Sebranek","doi":"10.1044/2025_JSLHR-24-00513","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00513","url":null,"abstract":"<p><strong>Purpose: </strong>Attention is a key cognitive function crucial for selecting and processing information. It is often divided into three components: alerting, orienting, and executive control. While there are tasks designed to simultaneously assess the attentional subsystems in the visual modality, creating an effective auditory task has been challenging, especially for clinical populations. This study aimed to explore whether a new Auditory Attention Task (AAT) measures all three attentional subsystems in neurotypical controls.</p><p><strong>Method: </strong>Forty-eight young adults completed the AAT, in which they judged the duration of the first of two tones while ignoring the second tone's duration. Executive control was assessed by comparing performance on trials with conflict (incongruent) and without conflict (congruent). The tones could also differ in frequency; performance differences between trials with same versus different frequencies measured orienting attention. A warning cue was presented before the first pure tone on half of the trials. Alerting attention was measured by comparing performance on trials with and without a warning cue.</p><p><strong>Results: </strong>The AAT measured alerting, orienting, and executive control attention as expected. Participants were faster on warned than nonwarned trials (alerting) and on same- versus different-frequency trials (orienting). Participants were also faster and more accurate on same- versus different-duration trials (executive control). 
We also observed several interactions between the attentional subsystems.</p><p><strong>Conclusions: </strong>Our results demonstrate that the AAT measured alerting, orienting, and executive control attention. However, additional work is needed to explore the AAT's utility in clinical populations, such as people with aphasia.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.29525717.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-12"},"PeriodicalIF":0.0,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
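In attention-network designs like the AAT described above, each subsystem effect is a plain difference between condition means. A toy sketch under that assumption (the condition labels and reaction-time values are hypothetical, not the study's data):

```python
from statistics import mean

def subsystem_effects(rt):
    """rt maps condition labels to lists of correct-trial reaction times (ms).
    Each subsystem effect is the mean RT of the harder condition minus the
    mean RT of the easier one, following attention-network scoring."""
    return {
        "alerting": mean(rt["no_cue"]) - mean(rt["cue"]),             # warning-cue benefit
        "orienting": mean(rt["diff_freq"]) - mean(rt["same_freq"]),   # frequency-change cost
        "executive": mean(rt["incongruent"]) - mean(rt["congruent"]), # conflict cost
    }

# Hypothetical RTs; positive effects indicate the manipulation slowed responses.
effects = subsystem_effects({
    "cue": [480, 490], "no_cue": [520, 530],
    "same_freq": [490, 490], "diff_freq": [510, 510],
    "congruent": [500, 500], "incongruent": [560, 560],
})
```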
{"title":"Acoustic Correlates of Timing Typicality in Speakers With Parkinson's Disease.","authors":"Saul A Frankford, Cara E Stepp","doi":"10.1044/2025_JSLHR-24-00712","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00712","url":null,"abstract":"<p><strong>Purpose: </strong>The present study aimed to determine acoustic metrics that can approximate listener perception of the typicality of speech timing in individuals with Parkinson's disease (PD). It was hypothesized that the perception of timing typicality would correlate with measures based on the deviation from speech produced by individuals with typical speech (in speaking duration, in pause time and other disfluencies, and at the word level).</p><p><strong>Method: </strong>Twenty speakers with PD and 40 typical speakers matched in age and sex were recorded reading a standard passage. Acoustic timing measures were calculated for the speakers with PD, both absolute and relative to recordings from the typical speakers. Linear regression models were used to estimate the relationship strength between each acoustic measure and the perception of timing typicality. Models containing all variables and subsets of variables were compared to test study hypotheses.</p><p><strong>Results: </strong>A model consisting of mean word duration and mean interword duration, both in their absolute values and relative to control speakers, explained substantial variance in perceptual judgments of timing typicality in speakers with PD (<i>R</i><sup>2</sup> = .93).</p><p><strong>Conclusions: </strong>Timing measures based on the deviation from normative values and accounting for pausing and disfluencies may provide an acoustic basis for estimating timing typicality in people with PD. 
Future work should examine these acoustic metrics in a larger sample to determine their utility in research and clinical settings.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
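The two absolute predictors in the final model above, mean word duration and mean interword duration, can be computed directly from word-level time alignments. A sketch assuming word onset/offset times in seconds are already available (e.g., from a forced aligner; the function name and input format are illustrative, not the study's tooling):

```python
def timing_metrics(words):
    """words: list of (onset_s, offset_s) pairs for each word in a read
    passage, in order. Returns (mean word duration, mean interword duration),
    where interword duration is the gap between one word's offset and the
    next word's onset (pauses and other silent intervals)."""
    durations = [offset - onset for onset, offset in words]
    gaps = [words[i + 1][0] - words[i][1] for i in range(len(words) - 1)]
    mean_word = sum(durations) / len(durations)
    mean_gap = sum(gaps) / len(gaps) if gaps else 0.0
    return mean_word, mean_gap

# Three words with two short gaps between them.
mean_word, mean_gap = timing_metrics([(0.0, 0.4), (0.5, 1.0), (1.2, 1.6)])
```

Relative versions of these measures, as used in the model, would then be expressed as deviations from the control speakers' normative values.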
{"title":"Comparing Patient Outcomes in Aphasia Rehabilitation: Intensive Comprehensive, Modified Intensive Comprehensive, and Usual Care Models.","authors":"Jenna Griffin-Musick, Catherine Off, Victoria Scharp, Danielle Fahey, Laurie Slovarp, John Quindry","doi":"10.1044/2025_JSLHR-24-00806","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00806","url":null,"abstract":"<p><strong>Purpose: </strong>Aphasia negatively impacts functional communication, communicative participation, and psychosocial well-being in stroke survivors, requiring novel models of rehabilitation that are person centered and holistic. This study aimed to evaluate the feasibility and preliminary efficacy of three service delivery models: intensive comprehensive aphasia program (ICAP), modified intensive comprehensive aphasia program (mICAP), and usual care (UC).</p><p><strong>Method: </strong>This Phase I quasirandomized study investigated three models of service delivery for stroke survivors with post-acute aphasia: a 4-week, 84-hr ICAP; a 2-week, 24-hr mICAP; and an 8-week, 24-hr UC condition. A sample of 18 participants was recruited and quasirandomly assigned to one of the three conditions (ICAP: <i>n</i> = 8, mICAP: <i>n</i> = 6, UC: <i>n</i> = 4). Outcome measures assessed the constructs of language, functional communication, psychosocial well-being, and quality of life through individual, within-group, and between-group comparisons.</p><p><strong>Results: </strong>Overall, participants in the ICAP and mICAP groups demonstrated greater positive changes across multiple outcome measures compared to those in the UC condition. All 18 participants completed their respective programs with no attrition, with adherence rates highest in the ICAP group, followed by the mICAP and then UC.</p><p><strong>Conclusions: </strong>This Phase I pilot study provides initial feasibility and efficacy data directly comparing ICAP, mICAP, and UC service delivery models. 
Findings support the continued exploration of ICAP and mICAP models to address the diverse needs of individuals with aphasia.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-25"},"PeriodicalIF":0.0,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identification of Developmental Language Disorder in Bilingual Children: An Accurate and Time-Efficient Combination of Language Measurements.","authors":"Lotte Van den Eynde, Pieter De Clercq, Ellen Rombouts, Maaike Vandermosten, Inge Zink","doi":"10.1044/2025_JSLHR-24-00541","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00541","url":null,"abstract":"<p><strong>Purpose: </strong>This study addresses the challenge of identifying developmental language disorder (DLD) in bilingual children. Despite the broad range of language measurements documented in the literature, their individual contribution to a DLD diagnosis remains unclear. Administering a large number of tests yields a holistic view of the child, but this must be reconciled with a time-efficient protocol that is feasible in clinical practice. Therefore, we aimed to evaluate the accuracy and time efficiency of a comprehensive set of measurements through cross-validated machine learning.</p><p><strong>Method: </strong>In 50 typically developing bilingual children and 50 bilingual children with DLD aged between 5 and 9 years, background measurements were assessed, including hearing, intelligence, language experiences, and socioeconomic status. Alongside standardized language tests, a parental questionnaire on home language, narrative tasks, a nonword repetition task, and a cognitive inhibition task were administered. Both group differences and individual performance were studied.</p><p><strong>Results: </strong>Significant group differences were observed across most measurements. The most accurate and time-efficient protocol combined four measurements: sentence repetition, nonword repetition, the parental questionnaire, and the task measuring semantic and morphosyntactic comprehension, achieving 90% classification accuracy. 
Notably, adding more measurements to the protocol did not enhance accuracy.</p><p><strong>Conclusions: </strong>This data-driven analysis selected the measurements that are most contributive in identifying DLD in bilingual children. This language assessment protocol successfully combines time efficiency with high accuracy to diagnose DLD, resulting in a useful and feasible protocol for speech-language pathologists in clinical practice.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.29522192.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-20"},"PeriodicalIF":0.0,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
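The cross-validated evaluation described above can be illustrated with a deliberately simple stand-in: leave-one-out cross-validation wrapped around a nearest-centroid classifier. Both the classifier and the toy feature values are assumptions for illustration; the study's actual pipeline and measurements are not reproduced here:

```python
from statistics import mean

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation with a nearest-centroid classifier.
    X: list of feature vectors (e.g., z-scored sentence-repetition and
    nonword-repetition scores), y: class labels (0 = typically developing,
    1 = DLD). Returns the fraction of held-out cases classified correctly."""
    correct = 0
    for i in range(len(X)):
        train = [(x, lab) for j, (x, lab) in enumerate(zip(X, y)) if j != i]
        centroids = {}
        for lab in set(lab for _, lab in train):
            rows = [x for x, l in train if l == lab]
            centroids[lab] = [mean(col) for col in zip(*rows)]
        # Predict the class whose centroid is closest (squared Euclidean).
        pred = min(centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(X[i], centroids[lab])))
        correct += int(pred == y[i])
    return correct / len(X)

# Toy, well-separated data: LOOCV should classify every held-out case.
X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = [0, 0, 0, 1, 1, 1]
accuracy = loocv_accuracy(X, y)
```

Cross-validating in this way, rather than scoring on the training data, is what makes the reported 90% accuracy an estimate of performance on unseen children.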
{"title":"An Explanatory Model of Speech Communication Centered on Multiscale Rhythmic Modulation: Implications for Motor Speech Assessment and Intervention for Individuals With Amyotrophic Lateral Sclerosis.","authors":"Panying Rong, Erin Liston","doi":"10.1044/2025_JSLHR-24-00286","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00286","url":null,"abstract":"<p><strong>Purpose: </strong>This study proposed an explanatory model of speech communication centered on multiscale rhythmic modulation to inform motor speech assessment and management. To these ends, a fit-for-purpose, automated measurement tool was used to evaluate and/or cross-validate (a) the previously reported effect of a neuromotor disorder-amyotrophic lateral sclerosis (ALS)-and (b) the effects of two cueing strategies, commonly used in managing motor speech disorders, on rhythmic modulation of speech.</p><p><strong>Method: </strong>A secondary analysis was carried out on the X-ray Microbeam database. The analyzed data included the articulatory-kinematic and acoustic recordings of a phonetically loaded sentence produced by 19 individuals with ALS and 23 neurologically healthy controls in one habitual style and two nonhabitual styles as elicited by the slow and clear speech cues, respectively. The measurement tool quantified the modulation patterns of four articulators as well as four critical-band and one wide-band envelopes at three linguistically relevant timescales (delta, theta, beta/gamma) to assess rhythm control at the prosodic, syllabic, and subsyllabic levels. To address the research aims, the disease and speaking style effects on all modulation metrics were evaluated.</p><p><strong>Results: </strong>For Aim 1, speakers with ALS showed reduced modulation depth of multiple articulators and critical-band envelopes at all timescales. 
For Aim 2, the slow speech cue elicited changes in articulatory modulation at multiple timescales, globally enhancing the control of all and especially syllabic and subsyllabic rhythms in speakers with ALS. Clear speech primarily elicited changes in articulatory modulation at the theta timescale, generating a more restricted effect on syllabic rhythm.</p><p><strong>Conclusions: </strong>The findings generally aligned with our prior research, supporting the robust utility of the measurement tool for assessing rhythmic disturbances of speakers with ALS. Moreover, this tool showed promise for delineating cueing-elicited changes in rhythmic modulation of speech, which has potential implications in tailoring and evaluating the outcomes of behavioral intervention.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-25"},"PeriodicalIF":0.0,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144652063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
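Modulation depth at linguistically relevant timescales, as quantified by the measurement tool described above, is commonly estimated from the modulation spectrum of an amplitude envelope. A minimal sketch of that general idea (the FFT-based estimator, band edges, and test signal are generic assumptions, not the study's exact tool):

```python
import numpy as np

def band_modulation_fraction(envelope, fs, band):
    """Fraction of non-DC envelope modulation energy in `band` (Hz).
    envelope: amplitude envelope of a signal (or an articulator trace),
    fs: envelope sampling rate, band: (lo, hi) in Hz, e.g., delta ~(1, 3)
    or theta ~(4, 8), analogous to the timescales analyzed above."""
    env = np.asarray(envelope, dtype=float)
    env = env - env.mean()                      # remove the DC component
    power = np.abs(np.fft.rfft(env)) ** 2
    freqs = np.fft.rfftfreq(env.size, 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs < band[1])
    total = power[1:].sum()                     # exclude the DC bin
    return float(power[in_band].sum() / total) if total > 0 else 0.0

# A 4 Hz "syllable-rate" envelope puts nearly all its energy in theta.
t = np.arange(0, 2, 0.01)                       # 2 s sampled at 100 Hz
env = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)
theta_frac = band_modulation_fraction(env, 100, (4, 8))
```

Reduced values of such band-limited fractions in patient envelopes, relative to controls, correspond to the flattened modulation depth reported for the speakers with ALS.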