Journal of Speech Language and Hearing Research: Latest Publications

Validating the Influences of Methodological Decisions on Assessing the Spatiotemporal Stability of Speech Movement Sequences Using Children's Speech Data.
IF 2.2 · CAS Zone 2 (Medicine)
Journal of Speech Language and Hearing Research · Pub Date: 2024-11-11 · DOI: 10.1044/2024_JSLHR-24-00190
Alan Wisler, Kristin Teplansky, Janna Berlin, Jun Wang, Lisa Goffman
Purpose: Prior research introduced quantifiable effects of three methodological parameters (number of repetitions, stimulus length, and parsing error) on the spatiotemporal index (STI) using simulated data. Critically, these parameters often vary across studies. In this study, we validate these effects, previously demonstrated only via simulation, using children's speech data.
Method: Kinematic data were collected from 30 typically developing children and 15 children with developmental language disorder, all aged 6-8 years. All children repeated the sentence "buy Bobby a puppy" multiple times. Using these data, experiments were designed to mirror the previous simulated experiments as closely as possible to assess the effects of analytic decisions on the STI. Experiment 1 manipulated the number of repetitions, Experiment 2 manipulated stimulus length (the number of movement units in the target phrase), and Experiment 3 manipulated the precision of parsing of the articulatory trajectories.
Results: The findings of all three experiments closely mirror those of the prior simulation. Experiment 1 showed consistent underestimation of STI values from smaller repetition counts, consistent with the theoretical model, for all three participant groups. Experiment 2 found that speech segments containing fewer movements yield lower STI values than longer ones. Finally, Experiment 3 showed that even small parsing errors significantly increase measured STI values.
Conclusions: The results of this study are consistent with the findings of prior simulations in showing that the number of repetitions, length of stimuli, and amount of parsing error can all strongly influence the STI independent of behavioral factors. These results further confirm the importance of carefully considering the design of experiments that employ the STI.
Pages: 1-13
Citations: 0
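The STI at the center of this study is simple enough to compute directly. Below is a minimal sketch, assuming each repetition is a one-dimensional articulatory trajectory (e.g., lower-lip displacement); the 50-point time normalization and per-repetition z-scoring follow the standard STI recipe, but the function and toy data are illustrative, not the authors' code.

```python
import numpy as np

def spatiotemporal_index(trajectories, n_points=50):
    """STI: sum of across-repetition standard deviations of time- and
    amplitude-normalized movement trajectories."""
    normalized = []
    for traj in trajectories:
        traj = np.asarray(traj, dtype=float)
        # Time-normalize: linearly resample every repetition to n_points.
        t_old = np.linspace(0.0, 1.0, traj.size)
        t_new = np.linspace(0.0, 1.0, n_points)
        resampled = np.interp(t_new, t_old, traj)
        # Amplitude-normalize: z-score each repetition.
        normalized.append((resampled - resampled.mean()) / resampled.std())
    stacked = np.vstack(normalized)            # shape: (n_repetitions, n_points)
    return float(np.sum(stacked.std(axis=0)))  # sum of pointwise SDs

# Toy data: ten noisy repetitions of the same movement, varying in duration.
rng = np.random.default_rng(0)
reps = []
for _ in range(10):
    n = int(rng.integers(80, 120))
    reps.append(np.sin(np.linspace(0, 2 * np.pi, n)) + 0.1 * rng.standard_normal(n))
print(f"STI = {spatiotemporal_index(reps):.2f}")
```

With fewer repetitions the pointwise standard deviations are biased low, which is exactly the underestimation Experiment 1 validates.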
Balancing Standardization and Ecological Validity in the Measurement of Social Communication Intervention Outcomes.
IF 2.2 · CAS Zone 2 (Medicine)
Journal of Speech Language and Hearing Research · Pub Date: 2024-11-11 · DOI: 10.1044/2024_JSLHR-23-00607
Hannah Feiner, Bailey Sone, Jordan Lee, Aaron J Kaat, Megan Y Roberts
Purpose: Caregiver-mediated communication intervention outcomes are inconsistently measured, varying by assessment settings, materials, and activities. Standardized materials are often used for measuring outcomes, yet it remains unknown whether such standardized contexts equitably capture caregiver and child intervention outcomes representative of dyads' typical interactions. This within-subject study investigates how intervention outcomes differ between family-selected and standardized interactional contexts for autistic toddlers and their caregivers.
Method: Following an 8-week caregiver-mediated telehealth intervention delivered to 22 dyads, caregiver outcomes (fidelity of using responsive communication facilitation strategies) and child outcomes (total spontaneous directed communicative acts) were measured in two interactional contexts: (a) family-selected activities and (b) a standardized toy set. A routines checklist surveyed the activities dyads value, enjoy, complete frequently, and/or find difficult with their child.
Results: Caregiver and child outcomes did not differ significantly between the family-selected and standardized interactional contexts. Descriptive results suggest that the types of toys commonly included in standardized toy sets are representative of the materials many families choose when playing with their child at home. However, during the family-selected context, the majority of dyads also chose materials or activities that were not available to them during the standardized context.
Conclusion: A more expansive approach to standardization should be carefully considered, in which intervention outcomes are measured in ecologically valid contexts that meaningfully, accurately, and equitably capture caregiver and child functional outcomes and the translation of interventions to families' everyday routines.
Pages: 1-12
Citations: 0
Cortical Tracking of Speech Is Reduced in Adults Who Stutter When Listening for Speaking.
IF 2.2 · CAS Zone 2 (Medicine)
Journal of Speech Language and Hearing Research · Pub Date: 2024-11-07 · Epub: 2024-10-22 · DOI: 10.1044/2024_JSLHR-24-00227
Simone Gastaldon, Pierpaolo Busan, Nicola Molinaro, Mikel Lizarazu
Purpose: The purpose of this study was to investigate cortical tracking of speech (CTS) in adults who stutter (AWS) compared to typically fluent adults (TFAs) to test the involvement of the speech-motor network in tracking rhythmic speech information.
Method: Participants' electroencephalogram was recorded while they either simply listened to sentences (listening only) or completed them by naming a picture (listening for speaking), thus manipulating the upcoming involvement of speech production. We analyzed speech-brain coherence and brain connectivity during listening.
Results: During the listening-for-speaking task, AWS exhibited reduced CTS in the 3- to 5-Hz (theta) range, corresponding to the syllabic rhythm. The effect was localized in the left inferior parietal and right pre/supplementary motor regions. Connectivity analyses revealed that TFAs had stronger information transfer in the theta range in both tasks in fronto-temporo-parietal regions. Across the whole sample, increased connectivity from the right superior temporal cortex to the left sensorimotor cortex was correlated with faster naming times in the listening-for-speaking task.
Conclusions: Atypical speech-motor functioning in stuttering impacts speech perception, especially in situations requiring articulatory alertness. The involvement of frontal and (pre)motor regions in CTS in TFAs is highlighted. Speech perception in individuals with speech-motor deficits warrants further investigation, especially where smooth transitions between listening and speaking are required, as in real-life conversational settings.
Supplemental material: https://doi.org/10.23641/asha.27234885
Pages: 4339-4357
Citations: 0
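Speech-brain coherence of the kind analyzed above can be sketched with standard signal-processing tools. The snippet below uses synthetic stand-ins for the speech amplitude envelope and one EEG channel and averages the coherence estimate over the 3- to 5-Hz (theta, syllabic-rate) band the article targets; the sampling rate, window length, and signals are assumptions, not the study's MEG/EEG pipeline.

```python
import numpy as np
from scipy.signal import coherence

fs = 250  # Hz, a typical downsampled EEG rate (assumed)
rng = np.random.default_rng(1)
t = np.arange(fs * 60) / fs  # 60 s of "listening"

# A 4-Hz (syllable-rate) modulation standing in for a real speech envelope,
# and an EEG channel that partially tracks it.
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t) + 0.3 * rng.standard_normal(t.size)
eeg = 0.5 * envelope + rng.standard_normal(t.size)

f, coh = coherence(envelope, eeg, fs=fs, nperseg=fs * 2)
theta = (f >= 3) & (f <= 5)  # the band where the article localizes the group difference
print(f"mean theta-band speech-brain coherence: {coh[theta].mean():.3f}")
```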
Hearing Impairment: Reduced Pupil Dilation Response and Frontal Activation During Degraded Speech Perception.
IF 2.2 · CAS Zone 2 (Medicine)
Journal of Speech Language and Hearing Research · Pub Date: 2024-11-07 · Epub: 2024-10-11 · DOI: 10.1044/2024_JSLHR-24-00017
Adriana A Zekveld, Sophia E Kramer, Dirk J Heslenfeld, Niek J Versfeld, Chris Vriend
Purpose: A relevant aspect of listening is the effort required during speech processing, which can be assessed by pupillometry. Here, we assessed the pupil dilation response of normal-hearing (NH) and hard-of-hearing (HH) individuals while they listened to clear sentences and to masked or degraded sentences. We combined this assessment with functional magnetic resonance imaging (fMRI) to investigate the neural correlates of the pupil dilation response.
Method: Seventeen NH participants (mean age = 46 years) were compared to 17 HH participants (mean age = 45 years) individually matched in age and educational level. Participants repeated sentences that were presented clearly, distorted, or masked. The intelligibility of masked and distorted sentences was set at 50% correct. Silent baseline trials were presented as well. Performance measures, pupil dilation responses, and fMRI data were acquired.
Results: HH individuals had overall poorer speech reception than NH participants, except for noise-vocoded speech. In addition, an interaction effect was observed, with smaller pupil dilation responses in HH than in NH listeners in the degraded speech conditions. Hearing impairment was associated with higher activation across conditions in the left superior temporal gyrus, relative to the silent baseline. However, the region-of-interest analysis indicated lower activation during degraded speech relative to clear speech in bilateral frontal regions and the insular cortex for HH compared to NH listeners. Hearing impairment was also associated with a weaker relation between the pupil response and activation in the right inferior frontal gyrus. Overall, degraded speech evoked higher frontal activation than clear speech.
Conclusion: Brain areas associated with attentional and cognitive-control processes may be increasingly recruited when speech is degraded and are related to the pupil dilation response, but this relationship is weaker in HH listeners.
Supplemental material: https://doi.org/10.23641/asha.27162135
Pages: 4549-4566
Citations: 0
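Pupil dilation responses such as those above are conventionally quantified relative to a pre-stimulus baseline. A minimal sketch of that step follows; the sampling rate, window lengths, and peak-dilation summary are illustrative choices, not the article's exact pipeline.

```python
import numpy as np

def pupil_dilation_response(trials, fs=60, baseline_s=1.0):
    """Baseline-correct trial-locked pupil traces and return mean peak dilation.

    trials: (n_trials, n_samples) pupil diameter, with the first
    `baseline_s` seconds preceding sentence onset.
    """
    n_base = int(fs * baseline_s)
    baseline = trials[:, :n_base].mean(axis=1, keepdims=True)
    corrected = trials - baseline                 # dilation relative to baseline
    return corrected[:, n_base:].max(axis=1).mean()  # peak per trial, then averaged

# Toy data: 40 trials of 5 s at 60 Hz, with an injected dilation after onset.
rng = np.random.default_rng(2)
fake = 3.0 + 0.2 * rng.standard_normal((40, 60 * 5))
fake[:, 120:240] += 0.3
print(f"mean peak dilation: {pupil_dilation_response(fake):.3f} mm")
```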
An Articulatory Analysis of American English Rhotics in Children With and Without a History of Residual Speech Sound Disorder.
IF 4.6 · CAS Zone 2 (Medicine)
Journal of Speech Language and Hearing Research · Pub Date: 2024-11-07 · Epub: 2024-10-14 · DOI: 10.1044/2024_JSLHR-24-00037
Amanda Eads, Heather Kabakoff, Hannah King, Jonathan L Preston, Tara McAllister
Purpose: This study investigated articulatory patterns for American English /ɹ/ in children with and without a history of residual speech sound disorder (RSSD). It was hypothesized that children without RSSD would favor bunched tongue shapes, similar to American adults reported in previous literature. Based on clinical cueing practices, it was hypothesized that children with RSSD would produce retroflex tongue shape patterns at a higher relative rate. Finally, it was hypothesized that, among children who use a mixture of bunched and retroflex shapes, phonetic context would affect tongue shape as reported in the adult literature.
Method: These hypotheses were tested using ultrasound data from a stimulability task eliciting /ɹ/ in syllabic, postvocalic, and onset contexts. Participants were two groups of children and adolescents aged 9-15 years: 36 with RSSD who had completed a study of ultrasound biofeedback treatment and 33 with no history of RSSD. Tongue shapes were qualitatively coded as bunched or retroflex using a flowchart from previous research.
Results: Children with no history of RSSD used bunched-only tongue shape patterns at a rate higher than adults, but those who used a mixture of shapes for /ɹ/ followed the expected phonetic contextual patterning. Children with RSSD used retroflex-only patterns at a substantially higher rate than adults, and those using a mixture of shapes did not exhibit the expected patterning by phonetic context.
Conclusions: These findings suggest that clients receiving ultrasound biofeedback treatment for /ɹ/ may be most responsive to clinician cueing of retroflex shapes, at least early on. However, retroflex-only cueing may be a limiting and insufficient strategy, particularly in light of the lack of typical variation across phonetic contexts in children with remediated /ɹ/. Future research should track cueing strategies more specifically to better understand the relationship between clinician cues, tongue shapes, and generalization across a range of contexts.
Supplemental material: https://doi.org/10.23641/asha.26801050
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11567108/pdf/
Pages: 4246-4263
Citations: 0
FluencyBank Timestamped: An Updated Data Set for Disfluency Detection and Automatic Intended Speech Recognition.
IF 2.2 · CAS Zone 2 (Medicine)
Journal of Speech Language and Hearing Research · Pub Date: 2024-11-07 · Epub: 2024-10-08 · DOI: 10.1044/2024_JSLHR-24-00070
Amrit Romana, Minxue Niu, Matthew Perez, Emily Mower Provost
Purpose: This work introduces updated transcripts, disfluency annotations, and word timings for FluencyBank, which we refer to as FluencyBank Timestamped. This data set will enable thorough analysis of how speech processing models (such as speech recognition and disfluency detection models) perform when evaluated on typical speech versus speech from people who stutter (PWS).
Method: We update the FluencyBank data set, which includes audio recordings from adults who stutter, to explore the robustness of speech processing models. Our update (semi-automated with manual review) includes new transcripts with timestamps and disfluency labels for each token in the transcript. The disfluency labels capture typical disfluencies (filled pauses, repetitions, revisions, and partial words), and we compare speech model performance on Switchboard (typical speech) and FluencyBank Timestamped. We present benchmarks for three speech tasks: intended speech recognition, text-based disfluency detection, and audio-based disfluency detection. For the first task, we evaluate how well Whisper performs intended speech recognition (i.e., transcribing speech without disfluencies). For the other tasks, we evaluate how well a text-based Bidirectional Encoder Representations from Transformers (BERT) model and an audio-based Whisper model perform disfluency detection. We selected BERT and Whisper because they have shown high accuracy on a broad range of tasks in their respective language and audio domains.
Results: For the transcription task, we calculate an intended speech word error rate (isWER) between the model's output and the speaker's intended speech (i.e., speech without disfluencies). We find isWER is comparable between Switchboard and FluencyBank Timestamped, but Whisper transcribes filled pauses and partial words at higher rates in the latter data set. Within FluencyBank Timestamped, isWER increases with stuttering severity. For the disfluency detection tasks, the models detect filled pauses, revisions, and partial words relatively well in FluencyBank Timestamped, but performance drops substantially for repetitions because the models are unable to generalize to the different types of repetitions (e.g., multiple repetitions and sound repetitions) produced by PWS. We hope that FluencyBank Timestamped will allow researchers to explore closing the performance gaps between typical speech and speech from PWS.
Conclusions: Our analysis shows that there are gaps in speech recognition and disfluency detection performance between typical speech and speech from PWS. We hope that FluencyBank Timestamped will contribute to further advancements in training robust speech processing models.
Pages: 4203-4215
Citations: 0
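The isWER metric described above is an ordinary word error rate computed against a reference from which disfluencies have been removed, so the recognizer is scored only on recovering the intended words. Below is a minimal sketch with a hand-rolled token-level Levenshtein distance; the filled-pause set and example transcripts are placeholders, not FluencyBank's actual annotation scheme (a fuller version would also strip labeled repetitions, revisions, and partial words).

```python
def wer(reference, hypothesis):
    """Word error rate via Levenshtein distance over whitespace tokens."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

FILLED_PAUSES = {"uh", "um"}  # placeholder set, not the data set's labels

def intended(transcript):
    """Approximate the intended speech by dropping filled pauses only."""
    return " ".join(w for w in transcript.split() if w not in FILLED_PAUSES)

verbatim = "i uh i want um want to go"
asr_out = "i uh want to go"
print(f"isWER = {wer(intended(verbatim), asr_out):.2f}")
```

In this toy example the recognizer is penalized both for transcribing the filled pause and for dropping a repetition still present in the reference, illustrating why the quality of the disfluency annotations matters for the metric.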
Imitation of Multisyllabic Items by Children With Developmental Language Disorder: Evidence for Word-Level Atypical Speech Envelope and Pitch Contours.
IF 2.2 · CAS Zone 2 (Medicine)
Journal of Speech Language and Hearing Research · Pub Date: 2024-11-07 · Epub: 2024-10-11 · DOI: 10.1044/2024_JSLHR-24-00031
Lyla Parvez, Mahmoud Keshavarzi, Susan Richards, Giovanni M Di Liberto, Usha Goswami
Purpose: Developmental language disorder (DLD) is a multifaceted disorder. Recently, interest has grown in prosodic aspects of DLD, but most investigations of possible prosodic causes focus on speech perception tasks. Here, we focus on speech production from a speech amplitude envelope (AE) perspective. Perceptual studies have indicated a role for difficulties in AE processing in DLD related to sensory/neural processing of prosody. We explore possible matching AE difficulties in production.
Method: Fifty-seven children with and without DLD completed a computerized imitation task, copying aloud 30 familiar targets such as "alligator." Children with DLD (n = 20) were compared with typically developing age-matched controls (AMC, n = 21) and younger language controls (YLC, n = 16). Similarity of each child's production to the target in terms of the continuous AE and the pitch contour was computed using two similarity metrics: correlation and mutual information. Both the speech AE and the pitch contour contain important information about stress patterning and intonation over time.
Results: Children with DLD showed significantly reduced imitation for both the AE and pitch contour metrics compared to AMC children. The opportunity to repeat the targets had no impact on performance for any group. Word-length effects were similar across groups.
Conclusions: The spoken production of multisyllabic words by children with DLD is atypical with respect to both the AE and the pitch contour. This is consistent with a theoretical explanation of DLD based on impaired sensory/neural processing of low-frequency (slow) amplitude and frequency modulations, as predicted by temporal sampling theory.
Supplemental material: https://doi.org/10.23641/asha.27165690
Pages: 4288-4303
Citations: 0
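The correlation-based AE similarity metric is easy to approximate with standard tools: extract each signal's envelope, low-pass it to keep the slow prosodic modulations, length-normalize, and correlate. The sketch below does this on synthetic audio; the cutoff frequency, resampling length, and signals are assumptions, and the article's mutual-information metric is not shown.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, resample

def amplitude_envelope(signal, fs, cutoff_hz=10, n_points=500):
    """Hilbert envelope, low-pass filtered to keep slow (prosodic)
    modulations, then resampled to a common length."""
    env = np.abs(hilbert(signal))
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return resample(filtfilt(b, a, env), n_points)

def envelope_similarity(target, imitation, fs):
    """Pearson correlation between the two length-normalized envelopes."""
    a = amplitude_envelope(target, fs)
    b = amplitude_envelope(imitation, fs)
    return np.corrcoef(a, b)[0, 1]

fs = 16000
t = np.arange(fs) / fs  # 1 s of audio
rng = np.random.default_rng(3)
# Noise modulated at roughly 3 "syllables" per second, plus a noisy copy.
target = np.sin(2 * np.pi * 3 * t) ** 2 * rng.standard_normal(t.size)
imitation = 0.8 * target + 0.2 * rng.standard_normal(t.size)
print(f"envelope correlation: {envelope_similarity(target, imitation, fs):.2f}")
```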
Sharing Stories Versus Explaining Facts: Comparing African American Children's Microstructure Performance Across Fictional Narrative, Informational, and Procedural Discourse.
IF 2.2 · CAS Zone 2 (Medicine)
Journal of Speech Language and Hearing Research · Pub Date: 2024-11-07 · Epub: 2024-10-11 · DOI: 10.1044/2024_JSLHR-23-00579
Nicole Gardner-Neblett, Dulce Lopez Alvarez
Purpose: Both fictional oral narrative and expository oral discourse skills are critical language competencies that support children's academic success. Few studies, however, have examined African American children's microstructure performance across these genres. To address this gap in the literature, the study compared African American children's microstructure productivity and complexity across three discourse contexts: fictional narratives, informational discourse, and procedural discourse. The study also examined whether there were age-related differences in microstructure performance by discourse type.
Method: Participants were 130 typically developing African American children, aged 59-95 months, enrolled in kindergarten through second grade in a midwestern U.S. public school district. Wordless children's books were used to elicit fictional narrative, informational, and procedural discourse. Microstructure performance was indexed by measures of productivity (number of total words and number of different words) and complexity (mean length of communication unit and complex syntax rate). The effects of genre and age on microstructure performance were assessed using linear mixed-effects regression models.
Results: Children produced longer discourse and used a greater diversity of words in their fictional stories than in their informational or procedural discourse. Grammatical complexity was greater for fictional narratives and procedural discourse than for informational discourse. Older children showed greater productivity and complexity than younger children, particularly for fictional and informational discourse.
Conclusions: African American children's microstructure performance varies by discourse context and age. Understanding this variation is key to providing African American children with support to maximize their oral language competencies.
Pages: 4431-4445
Citations: 0
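The productivity and complexity measures named above reduce to simple counting once a transcript is segmented into communication units (C-units). A minimal sketch, assuming pre-segmented C-units; complex syntax rate is omitted because it requires clause-level coding that plain tokenization cannot provide.

```python
import re

def microstructure(c_units):
    """Productivity (NTW, NDW) and complexity (MLCU) over a transcript
    segmented into communication units."""
    words = [w for cu in c_units for w in re.findall(r"[a-z']+", cu.lower())]
    ntw = len(words)           # number of total words
    ndw = len(set(words))      # number of different words
    mlcu = ntw / len(c_units)  # mean length of C-unit, in words
    return {"NTW": ntw, "NDW": ndw, "MLCU": round(mlcu, 2)}

# Hypothetical elicited sample, one C-unit per string.
sample = [
    "the frog jumped out of the jar",
    "and the boy looked everywhere",
    "because he wanted to find his frog",
]
print(microstructure(sample))
```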
Neural Decoding of Spontaneous Overt and Intended Speech.
IF 2.2 · CAS Zone 2 (Medicine)
Journal of Speech Language and Hearing Research · Pub Date: 2024-11-07 · Epub: 2024-08-06 · DOI: 10.1044/2024_JSLHR-24-00046
Debadatta Dash, Paul Ferrari, Jun Wang
Purpose: The aim of this study was to decode intended and overt speech from neuromagnetic signals while participants performed spontaneous overt speech tasks without cues or prompts (stimuli).
Method: Magnetoencephalography (MEG), a noninvasive neuroimaging technique, was used to record neural signals from seven healthy adult English speakers performing spontaneous, overt speech tasks. The participants spoke the words "yes" or "no" in random order at a self-paced rate, without cues. Two machine learning models, linear discriminant analysis (LDA) and a one-dimensional convolutional neural network (1D CNN), were employed to classify the two words from the recorded MEG signals.
Results: LDA and the 1D CNN achieved average decoding accuracies of 79.02% and 90.40%, respectively, in decoding overt speech, significantly surpassing the chance level (50%). The accuracy for decoding intended speech was 67.19% using the 1D CNN.
Conclusions: This study demonstrates the possibility of decoding spontaneous overt and intended speech directly from neural signals in the absence of perceptual interference. We believe these findings are a steady step toward a future spontaneous-speech-based brain-computer interface.
Pages: 4216-4225
Citations: 0
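Of the two classifiers, LDA is the simpler to illustrate. The sketch below runs cross-validated LDA on random stand-in features with a small injected class separation; the feature dimensions and 5-fold scheme are assumptions, not the study's MEG pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Stand-in MEG features: one row per spoken-word trial, e.g., a flattened
# sensor-by-time window around speech onset.
rng = np.random.default_rng(4)
n_trials, n_features = 200, 300
y = rng.integers(0, 2, n_trials)            # 0 = "yes", 1 = "no"
X = rng.standard_normal((n_trials, n_features))
X[y == 1] += 0.15                           # injected class separation

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)   # chance level is 50% for two classes
print(f"decoding accuracy: {scores.mean():.1%} ± {scores.std():.1%}")
```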
Conflict Adaptation in Aphasia: Upregulating Cognitive Control for Improved Sentence Comprehension.
IF 4.6 · CAS Zone 2 (Medicine)
Journal of Speech Language and Hearing Research · Pub Date: 2024-11-07 · Epub: 2024-10-08 · DOI: 10.1044/2024_JSLHR-23-00768
Anna Krason, Erica L Middleton, Matthew E P Ambrogi, Malathi Thothathiri
Purpose: This study investigated conflict adaptation in aphasia, specifically whether upregulating cognitive control improves sentence comprehension.
Method: Four individuals with mild aphasia completed four eye-tracking sessions with interleaved auditory Stroop and sentence-to-picture matching trials (critical and filler sentences). Auditory Stroop congruency (congruent/incongruent across a male/female voice saying "boy"/"girl") was crossed with sentence congruency (syntactically correct sentences that are semantically plausible/implausible), resulting in four experimental conditions: congruent auditory Stroop followed by an incongruent sentence (CI), incongruent Stroop followed by an incongruent sentence (II), congruent Stroop followed by a congruent sentence (CC), and incongruent Stroop followed by a congruent sentence (IC). Critical sentences were always preceded by auditory Stroop trials. At the end of each session, a five-item questionnaire assessed overall well-being and fatigue. We conducted individual-level mixed-effects regressions on reaction times and growth curve analyses on the proportion of eye fixations to target pictures during incongruent sentences.
Results: One participant showed conflict adaptation, indicated by faster reaction times on active sentences and more rapid growth in fixations to target pictures on passive sentences in the II condition compared to the CI condition. Incongruent auditory Stroop also modulated active-sentence processing in an additional participant, as indicated by eye movements.
Conclusions: This is the first study to observe conflict adaptation in sentence comprehension in people with aphasia. The extent of adaptation varied across individuals, and eye tracking revealed subtler effects than overt behavioral measures. The results extend the study of conflict adaptation beyond neurotypical adults and suggest that upregulating cognitive control may be a potential treatment avenue for some individuals with aphasia.
Supplemental material: https://doi.org/10.23641/asha.27056149
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11567075/pdf/
Pages: 4411-4430
Citations: 0
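The conflict-adaptation effect in this design is the interaction between prior Stroop congruency and sentence congruency: the sentence-congruency cost should shrink after an incongruent Stroop trial (II faster than CI). Below is a simplified sketch of that test on simulated single-participant reaction times, using plain OLS rather than the article's individual-level mixed-effects models; all numbers are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated trials: prior auditory Stroop congruency crossed with
# sentence congruency (the CI/II/CC/IC cells of the article).
rng = np.random.default_rng(5)
n = 200
df = pd.DataFrame({
    "stroop": rng.choice(["congruent", "incongruent"], n),
    "sentence": rng.choice(["congruent", "incongruent"], n),
})
rt = 900 + 80 * (df["sentence"] == "incongruent")          # baseline congruency cost
# Conflict adaptation: the cost shrinks after an incongruent Stroop trial.
rt -= 40 * ((df["stroop"] == "incongruent") & (df["sentence"] == "incongruent"))
df["rt"] = rt + 50 * rng.standard_normal(n)                # trial-level noise, ms

# The stroop:sentence interaction term carries the conflict-adaptation test.
model = smf.ols("rt ~ stroop * sentence", data=df).fit()
print(model.summary().tables[1])
```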