{"title":"The Collaboverse: A Collaborative Data-Sharing and Speech Analysis Platform.","authors":"Justin D Dvorak, Frank R Boutsen","doi":"10.1044/2024_JSLHR-23-00286","DOIUrl":"10.1044/2024_JSLHR-23-00286","url":null,"abstract":"<p><strong>Purpose: </strong>Collaboration in the field of speech-language pathology occurs across a variety of digital devices and can entail the usage of multiple software tools, systems, file formats, and even programming languages. Unfortunately, gaps between the laboratory, clinic, and classroom can emerge in part because of siloing of data and workflows, as well as the digital divide between users. The purpose of this tutorial is to present the Collaboverse, a web-based collaborative system that unifies these domains, and describe the application of this tool to common tasks in speech-language pathology. In addition, we demonstrate its utility in machine learning (ML) applications.</p><p><strong>Method: </strong>This tutorial outlines key concepts in the digital divide, data management, distributed computing, and ML. It introduces the Collaboverse workspace for researchers, clinicians, and educators in speech-language pathology who wish to improve their collaborative network and leverage advanced computation abilities. It also details an ML approach to prosodic analysis.</p><p><strong>Conclusions: </strong>The Collaboverse shows promise in narrowing the digital divide and is capable of generating clinically relevant data, specifically in the area of prosody, whose computational complexity has limited widespread analysis in research and clinic alike. In addition, it includes an augmentative and alternative communication app allowing visual, nontextual communication.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4137-4156"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141602150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Reported Stuttering Severity Is Accurate: Informing Methods for Large-Scale Data Collection in Stuttering.","authors":"Sarah Horton, Victoria Jackson, Jessica Boyce, Marie-Christine Franken, Stephanie Siemers, Miya St John, Stephen Hearps, Olivia van Reyk, Ruth Braden, Richard Parker, Adam P Vogel, Else Eising, David J Amor, Janelle Irvine, Simon E Fisher, Nicholas G Martin, Sheena Reilly, Melanie Bahlo, Ingrid Scheffer, Angela Morgan","doi":"10.1044/2023_JSLHR-23-00081","DOIUrl":"10.1044/2023_JSLHR-23-00081","url":null,"abstract":"<p><strong>Purpose: </strong>To our knowledge, there are no data examining the agreement between self-reported and clinician-rated stuttering severity. In the era of big data, self-reported ratings have great potential utility for large-scale data collection, where cost and time preclude in-depth assessment by a clinician. Equally, there is increasing emphasis on the need to recognize an individual's experience of their own condition. Here, we examined the agreement between self-reported stuttering severity compared to clinician ratings during a speech assessment. As a secondary objective, we determined whether self-reported stuttering severity correlated with an individual's subjective impact of stuttering.</p><p><strong>Method: </strong>Speech-language pathologists conducted face-to-face speech assessments with 195 participants (137 males) aged 5-84 years, recruited from a cohort of people with self-reported stuttering. Stuttering severity was rated on a 10-point scale by the participant and by two speech-language pathologists. Participants also completed the Overall Assessment of the Subjective Experience of Stuttering (OASES). Clinician and participant ratings were compared. The association between stuttering severity and the OASES scores was examined.</p><p><strong>Results: </strong>There was a strong positive correlation between speech-language pathologist and participant-reported ratings of stuttering severity. Participant-reported stuttering severity correlated weakly with the four OASES domains and with the OASES overall impact score.</p><p><strong>Conclusions: </strong>Participants were able to accurately rate their stuttering severity during a speech assessment using a simple one-item question. This finding indicates that self-report stuttering severity is a suitable method for large-scale data collection. Findings also support the collection of self-report subjective experience data using questionnaires, such as the OASES, which add vital information about the participants' experience of stuttering that is not captured by overt speech severity ratings alone.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4015-4024"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138489077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Process-Oriented, Dimensional Approaches for Diagnosis and Treatment of Speech Sound Disorders in Children: Position Statement and Future Perspectives.","authors":"Ben Maassen, Hayo Terband","doi":"10.1044/2024_JSLHR-23-00591","DOIUrl":"10.1044/2024_JSLHR-23-00591","url":null,"abstract":"<p><strong>Background: </strong>Children with speech sound disorders (SSD) form a heterogeneous group, with respect to severity, etiology, proximal causes, speech error characteristics, and response to treatment. Infants develop speech and language in interaction with neurological maturation and general perceptual, motoric, and cognitive skills in a social-emotional context.</p><p><strong>Purpose: </strong>After a brief introduction into psycholinguistic models of speech production and levels of causation, in this review article, we present an in-depth overview of mechanisms and processes, and the dynamics thereof, which are crucial in typical speech development. These basic mechanisms and processes are: (a) neurophysiological motor refinement, that is, the maturational articulatory mechanisms that drive babbling and the more differentiated production of larger speech patterns; (b) sensorimotor integration, which forms the steering function from phonetics to phonology; and (c) motor hierarchy and articulatory phonology describing the gestural organization of syllables, which underlie fluent speech production. These dynamics have consequences for the diagnosis and further analysis of SSD in children. We argue that current diagnostic classification systems do not do justice to the multilevel, multifactorial, and interactive character of the underlying mechanisms and processes. This is illustrated by a recent Dutch study yielding distinct performance profiles among children with SSD, which allows for a dimensional interpretation of underlying processing deficits.</p><p><strong>Conclusions: </strong>Analyses of mainstream treatments with respect to the treatment goals and the speech mechanisms addressed show that treatment programs are quite transparent in their aims and approach and how they contribute to remediating specific deficits or mechanisms. Recent studies into clinical reasoning reveal that the clinical challenge for speech-language pathologists is how to select the most appropriate treatment at the most appropriate time for each individual child with SSD. We argue that a process-oriented approach has merits as compared to categorical diagnostics as a toolbox to aid in the interpretation of the speech profile in terms of underlying deficits and to connect these to a specific intervention approach and treatment target.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4115-4136"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142300396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accessing and Receiving Speech-Language Pathology Services at the Multidisciplinary Amyotrophic Lateral Sclerosis Clinic: An Exploratory Qualitative Study of Patient Experiences and Needs.","authors":"Anna Huynh, Kerry Adams, Carolina Barnett-Tapia, Sanjay Kalra, Lorne Zinman, Yana Yunusova","doi":"10.1044/2023_JSLHR-23-00087","DOIUrl":"10.1044/2023_JSLHR-23-00087","url":null,"abstract":"<p><strong>Purpose: </strong>This study sought to explore how patients with amyotrophic lateral sclerosis (ALS) presenting with coexisting bulbar and cognitive impairments and their caregivers experienced the speech-language pathologist (SLP) services provided in multidisciplinary ALS clinics in Canada and identified their perceived needs for bulbar symptom management.</p><p><strong>Method: </strong>This qualitative study was informed by interpretive description. Seven interviews were conducted with patients with severe bulbar dysfunction or severe bulbar and cognitive dysfunction due to ALS or ALS-frontotemporal dementia, respectively, and/or their caregivers. Purposive sampling was used to recruit individuals with severe bulbar or bulbar and cognitive disease. Thematic analysis was used to analyze interview data.</p><p><strong>Results: </strong>Patients and caregivers reported difficulties with accessing and receiving SLP services at the multidisciplinary ALS clinic. These difficulties were further exacerbated in those with severe cognitive disease. Participants expressed a need for more specific (i.e., disease and service-related) information and personalized care to address their changing needs and preferences. Engaging caregivers earlier in SLP appointments was perceived as vital to support care planning and provide in-time caregiver education.</p><p><strong>Conclusions: </strong>This study highlighted the challenges experienced by patients and caregivers in accessing and receiving SLP services. There is a pressing need for a more person-centered approach to ALS care and a continuing need for education of SLPs on care provision in cases of complex multisymptom diseases within a multidisciplinary ALS clinic.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.24069222.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4025-4037"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11547048/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10184562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acoustics of Breath Noises in Human Speech: Descriptive and Three-Dimensional Modeling Approaches.","authors":"Raphael Werner, Susanne Fuchs, Jürgen Trouvain, Steffen Kürbis, Bernd Möbius, Peter Birkholz","doi":"10.1044/2023_JSLHR-23-00112","DOIUrl":"10.1044/2023_JSLHR-23-00112","url":null,"abstract":"<p><strong>Purpose: </strong>Breathing is ubiquitous in speech production, crucial for structuring speech, and a potential diagnostic indicator for respiratory diseases. However, the acoustic characteristics of speech breathing remain underresearched. This work aims to characterize the spectral properties of human inhalation noises in a large speaker sample and explore their potential similarities with speech sounds. Speech sounds are mostly realized with egressive airflow. To account for this, we investigated the effect of airflow direction (inhalation vs. exhalation) on acoustic properties of certain vocal tract (VT) configurations.</p><p><strong>Method: </strong>To characterize human inhalation, we describe spectra of breath noises produced by human speakers from two data sets comprising 34 female and 100 male participants. To investigate the effect of airflow direction, three-dimensional-printed VT models of a male and a female speaker with static VT configurations of four vowels and four fricatives were used. An airstream was directed through these VT configurations in both directions, and their spectral consequences were analyzed.</p><p><strong>Results: </strong>For human inhalations, we found spectra with a decreasing slope and several weak peaks below 3 kHz. These peaks show moderate (female) to strong (male) overlap with resonances found for participants inhaling with a VT configuration of a central vowel. Results for the VT models suggest that airflow direction is crucial for spectral properties of sibilants, /ç/, and /i:/, but not the other sounds we investigated. Inhalation noise is most similar to /ə/ where airflow direction does not play a role.</p><p><strong>Conclusions: </strong>Inhalation is realized on ingressive airflow, and inhalation noises have specific resonance properties that are most similar to /ə/ but occur without phonation. Airflow direction does not play a role in this specific VT configuration, but subglottal resonances may do. For future work, we suggest investigating the articulation of speech breathing and link it to current work on pause postures.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.24520585.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"3947-3961"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136400235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Relation Between Leg Motion Rate and Speech Tempo During Submaximal Cycling Exercise.","authors":"Heather Weston, Wim Pouw, Susanne Fuchs","doi":"10.1044/2023_JSLHR-23-00178","DOIUrl":"10.1044/2023_JSLHR-23-00178","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated whether temporal coupling was present between lower limb motion rate and different speech tempi during different exercise intensities. We hypothesized that increased physical workload would increase cycling rate and that this could account for previous findings of increased speech tempo during exercise. We also investigated whether the choice of speech task (read vs. spontaneous speech) affected results.</p><p><strong>Method: </strong>Forty-eight women who were ages 18-35 years participated. A within-participant design was used with fixed-order physical workload and counterbalanced speech task conditions. Motion capture and acoustic data were collected during exercise and at rest. Speech tempo was assessed using the amplitude envelope and two derived intrinsic mode functions that approximated syllable-like and footlike oscillations in the speech signal. Analyses were conducted with linear mixed-effects models.</p><p><strong>Results: </strong>No direct entrainment between leg cycling rate and speech rate was observed. Leg cycling rate significantly increased from low to moderate workload for both speech tasks. All measures of speech tempo decreased when participants changed from rest to either low or moderate workload.</p><p><strong>Conclusions: </strong>Speech tempo does not show temporal coupling with the rate of self-generated leg motion at group level, which highlights the need to investigate potential faster scale momentary coupling. The unexpected finding that speech tempo decreases with increased physical workload may be explained by multiple mental and physical factors that are more diverse and individual than anticipated. The implication for real-world contexts is that even light physical activity-functionally equivalent to walking-may impact speech tempo.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"3931-3946"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139724917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breathing and Speech Adaptation: Do Speakers Adapt Toward a Confederate Talking Under Physical Effort?","authors":"Tom Offrede, Christine Mooshammer, Susanne Fuchs","doi":"10.1044/2023_JSLHR-23-00113","DOIUrl":"10.1044/2023_JSLHR-23-00113","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated whether speakers adapt their breathing and speech (fundamental frequency [<i>f</i><sub>o</sub>]) to a prerecorded confederate who is sitting or moving under different levels of physical effort and who is either speaking or not. Following Paccalin and Jeannerod (2000), we would expect breathing rate to change in the direction of the confederate's, even if the participant is physically inactive. This might in turn affect their speech acoustics.</p><p><strong>Method: </strong>We recorded the speech and respiration of 22 native German speakers. They produced solo and synchronous read speech in interaction with a confederate who appeared on a prerecorded video. There were three within-subject experimental conditions: the confederate (a) sitting, (b) biking with light effort, or (c) biking with heavier effort.</p><p><strong>Results: </strong>During speech, the confederate's inhalation amplitude and <i>f</i><sub>o</sub> increased with physical effort, as expected. Her breath cycle duration changed differently, probably because of read speech constraints. Overall, the only adaptation the participants showed was higher <i>f</i><sub>o</sub> with increase in the confederate's physical effort during synchronous, but not solo, speech. Additionally, they produced shallower inhalations when observing the confederate biking in silence, as compared to the condition without movement. Crucially, the participants' acoustic and breathing data showed large interindividual variability.</p><p><strong>Conclusions: </strong>Our findings indicate that, in this paradigm, convergence only took place on <i>f</i><sub>o</sub> during synchronous speech and that this phonetic adaptation happened independently from any speech breathing adaptation. It also suggests that participants may adapt their quiet breathing while watching a person performing physical exercise but that the mechanism is more complex than that explained previously.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"3914-3930"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139503180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exposure to Canadian French Cued Speech Improves Consonant Articulation in Children With Cochlear Implants: Acoustic and Articulatory Data.","authors":"Laura Machart, Anne Vilain, Hélène Lœvenbruck, Mark Tiede, Lucie Ménard","doi":"10.1044/2023_JSLHR-23-00078","DOIUrl":"10.1044/2023_JSLHR-23-00078","url":null,"abstract":"<p><strong>Purpose: </strong>One of the strategies that can be used to support speech communication in deaf children is cued speech, a visual code in which manual gestures are used as additional phonological information to supplement the acoustic and labial speech information. Cued speech has been shown to improve speech perception and phonological skills. This exploratory study aims to assess whether and how cued speech reading proficiency may also have a beneficial effect on the acoustic and articulatory correlates of consonant production in children.</p><p><strong>Method: </strong>Eight children with cochlear implants (from 5 to 11 years of age) and with different receptive proficiency in Canadian French Cued Speech (three children with low receptive proficiency vs. five children with high receptive proficiency) are compared to 10 children with typical hearing (from 4 to 11 years of age) on their production of stop and fricative consonants. Articulation was assessed with ultrasound measurements.</p><p><strong>Results: </strong>The preliminary results reveal that cued speech proficiency seems to sustain the development of speech production in children with cochlear implants and to improve their articulatory gestures, particularly for the place contrast in stops as well as fricatives.</p><p><strong>Conclusion: </strong>This work highlights the importance of studying objective data and comparing acoustic and articulatory measurements to better characterize speech production in children.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4069-4095"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140066246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stability Over Time of Word Syllable Duration for Speakers With Acquired Apraxia of Speech.","authors":"Lisa D Bunker, Dallin J Bailey, Elaine Poss, Shannon Mauszycki, Julie L Wambaugh","doi":"10.1044/2024_JSLHR-23-00007","DOIUrl":"10.1044/2024_JSLHR-23-00007","url":null,"abstract":"<p><strong>Purpose: </strong>Neurogenic speech and language disorders-such as acquired apraxia of speech (AOS) and aphasia with phonemic paraphasia (APP)-are often misdiagnosed due to similarities in clinical presentation. Word syllable duration (WSD)-a measure of average syllable length in multisyllabic words-serves as a proxy for speech rate, which is an important and arguably more objective clinical characteristic of AOS and APP. This study reports stability of WSD over time for speakers with AOS (and aphasia).</p><p><strong>Method: </strong>Twenty-nine participants with AOS and aphasia (11 women and 18 men, <i>M</i><sub>age</sub> = 53.5 years, <i>SD</i> = 13.3) repeated 30 multisyllabic words (of three-, four-, and five-syllable lengths) on three occasions across 4 weeks. WSDs were calculated for each word and then averaged across each list (i.e., word length), as well as across combined lists (i.e., all 30 words) to yield four WSDs for each participant at each time point. Stability over time was calculated using Friedman's test for the group and using Spearman's rho for the individual level. Effects of time and word length were examined using robust mixed-effects linear regression.</p><p><strong>Results: </strong>Friedman's tests and correlations indicated no significant difference in WSDs across sampling occasions for each word length separately or combined. WSD correlated positively with AOS severity and negatively with intelligibility but was not correlated with aphasia severity. Regression analyses confirmed WSD to be stable over time, while WSD calculated from only five tokens (i.e., WSD-5) was less stable over time.</p><p><strong>Conclusions: </strong>Results indicate that WSD can be a stable measure over time, at the individual and group level, providing support for its use in diagnosis and/or as an outcome measure, both clinically and for research. In general, WSD outperformed WSD-5, suggesting that it may be better to calculate WSD from more than five tokens. Stability of WSD in other populations and suitability for differential diagnosis need to be determined. Currently, differentiating disorders by speaking rate, alone, is not recommended.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.25438735.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4038-4052"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Early Oral Language and Cognitive Predictors of Emergent Literacy Skills in Arabic-Speaking Children: Evidence From Saudi Children With Developmental Language Disorder.","authors":"Zakiyah A Alsiddiqi, Vesna Stojanovik, Emma Pagnamenta","doi":"10.1044/2024_JSLHR-23-00643","DOIUrl":"https://doi.org/10.1044/2024_JSLHR-23-00643","url":null,"abstract":"<p><strong>Purpose: </strong>Although children with developmental language disorder (DLD) are known to have difficulties with emergent literacy skills, few available studies have examined emergent literacy skills in Arabic-speaking children with DLD. Even though Arabic language characteristics, such as diglossia and orthographic structure, influence the acquisition of literacy in Arabic-speaking children, research shows that oral language skills, such as vocabulary, and cognitive skills, such as verbal short-term memory (VSTM), predict literacy in Arabic-speaking children. Moreover, linguistic and memory abilities are impaired in children with DLD, including Arabic-speaking children. The current study examines the relationships between oral language, VSTM, and emergent literacy skills in Arabic-speaking typically developing (TD) children and children with DLD.</p><p><strong>Method: </strong>Participants were 40 TD children (20 girls; aged 4;0-6;11 [years;months]) and 26 children with DLD (nine girls, aged 4;0-6;11). All participants were monolingual Arabic speakers and matched on age and socioeconomic status. A set of comprehensive Arabic language (vocabulary knowledge, morphosyntactic, and listening comprehension skills), VSTM, and emergent literacy (phonological awareness and letter knowledge skills) tests were administered.</p><p><strong>Results: </strong>The DLD group scored significantly lower than the TD group on language, VSTM, and emergent literacy measures. Results revealed that the contributions of oral language and VSTM to emergent literacy skills across TD and DLD groups were different. In the TD group, VSTM predicted emergent literacy skills, whereas in the DLD groups, both vocabulary knowledge and VSTM predicted emergent literacy skills.</p><p><strong>Conclusions: </strong>This study represents an important first step in understanding emergent literacy skills and their relationships to language and memory in Arabic-speaking children with and without DLD. The implications of these findings for clinical and education provision are discussed.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-16"},"PeriodicalIF":2.2,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142407203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}