The Use of Automated Digital Data in Speech, Language, and Hearing Research: Confronting a New Ethical Landscape.
Batya Elbaum, Lynn K Perry, Christina M Sarangoulis, Kenneth W Goodman, Daniel S Messinger, Ivette Cejas
Journal of Speech, Language, and Hearing Research (JSLHR), pp. 4087-4093. Published 2025-08-12. DOI: 10.1044/2025_JSLHR-24-00819. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12384941/pdf/

Purpose: The use of automated, digital data collection, such as daylong audio recordings of children's language environments, is yielding important insights for both researchers and practitioners in the field of communication disorders. However, ethical issues involved in the use of digital tools for research purposes have yet to be thoroughly explored.

Method: In this commentary, we draw on our experience with automated data collection in inclusive auditory oral preschool classrooms, as well as interviews with parents, teachers, speech-language pathologists, researchers, and other community members, to identify key areas of ethical concern and draw out implications for future research.

Conclusion: We discuss specific issues and recommendations related to three emerging areas of concern: data storage and data sharing, the return of results to research participants, and the communication of incidental findings.

{"title":"Development of Verb and Noun Word Patterns in Arabic: A Comparison Between Typically Developing Children and Those With Reading Difficulties.","authors":"Ibrahim A Asadi, Khaloob Kawar, Gubair Tarabeh","doi":"10.1044/2025_JSLHR-24-00673","DOIUrl":"10.1044/2025_JSLHR-24-00673","url":null,"abstract":"<p><strong>Purpose: </strong>The study aims to investigate the developmental process of producing morphological word patterns of verbs (such as <i>/sˤɑnɑʢ</i>/, which means \"he produced\") versus nouns (such as /<i>mɑsˤnɑʢ</i>/, which means \"factory\") among Arabic-speaking children from first to sixth grades, including children with and without reading difficulties.</p><p><strong>Method: </strong>The research involved 1,469 Arabic-speaking children from first to sixth grades, among whom 177 children were identified as having reading difficulties. All children were tested by a morphological production task.</p><p><strong>Results: </strong>The analysis of variance on the retrieved word patterns showed significant main effects for word patterns, grade level, and reading proficiency, with a consistent advantage observed for verb patterns and higher performance among children with typical reading compared to those with reading difficulties. Furthermore, interactions of word patterns with both grade level and reading proficiency were found.</p><p><strong>Conclusions: </strong>The study highlights the importance of morphological acquisition in literacy development, particularly in the intricate relationship between morphology and semantics in Arabic. The findings suggest the potential benefit of specialized morphology instruction, particularly for students with reading challenges. Although the need and impact require further research, this study offers a valuable starting point, indicating developmental trajectories that educators should consider when integrating explicit instruction of verb and noun patterns as a potentially supportive element in literacy education.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"3976-3988"},"PeriodicalIF":2.2,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144577593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Role of Auditory and Visual Modality in Perception of Focus in Mandarin Chinese.","authors":"Shanpeng Li, Yihan Wu, Sasha Calhoun, Mengzhu Yan","doi":"10.1044/2025_JSLHR-24-00664","DOIUrl":"10.1044/2025_JSLHR-24-00664","url":null,"abstract":"<p><strong>Purpose: </strong>Speech perception is a complex process that involves multiple sensory modalities. Despite our intuitions of speech as something we hear, accumulating evidence has shown that speech perception is not solely dependent on the auditory modality. While it is well established that auditory and visual cues can both help listeners perceive focus, the latter is not established in Mandarin, and the relative contribution of these cues is not established at all. The current study investigated Mandarin listeners' integration of auditory and visual cues in the interpretation of focus in noise-degraded speech, through a question-answer appropriateness rating experiment.</p><p><strong>Method: </strong>To explore the effectiveness and relative contribution of auditory and visual modality in the interpretation of Mandarin focus, participants did a question-answer appropriateness rating task involving subject focus, object focus, and broad focus. All the question-answer pairs were constructed in three modalities: audio only, visual only, and audiovisual. They were instructed to rate the appropriateness of the question-answer pairs. A babble noise was superimposed on the audio track for the audio only and audiovisual conditions.</p><p><strong>Results and conclusions: </strong>Although auditory cues via prosodic prominence were an effective cue to interpreting focus, visual cues were proven more effective, at least with degraded audio. Overall, this research contributes to our understanding of the interaction between linguistic cues and sensory information during language comprehension, widens the range of languages included in this body of research, and provides important implications for future studies on focus processing in various linguistic contexts and communication settings. This, in turn, will deepen our understanding of the multimodal nature of language comprehension.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"3843-3860"},"PeriodicalIF":2.2,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144612782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multilingualism, Speech Disfluencies, and Stuttering: A Scoping Review.","authors":"Gizem Aslan, Kurt Eggers","doi":"10.1044/2025_JSLHR-24-00479","DOIUrl":"10.1044/2025_JSLHR-24-00479","url":null,"abstract":"<p><strong>Purpose: </strong>This scoping review examined differences in types and/or frequency of speech disfluencies between multilingual individuals who do and do not stutter. We also examined whether language dominance and/or proficiency influences the types and frequency of speech disfluencies.</p><p><strong>Method: </strong>The review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews guidelines. The search was conducted using inclusive search strings related to multilingualism and speech disfluencies in Medline, Web of Science, Scopus, and Embase. The following information was extracted for each of the studies: general study information (authors, title, year, research field, geographic location), participant information (number of participants, types of study groups, age groups, language dyads), study method, types of collected speech samples, terms used for referring to disfluencies, the definition of the term \"disfluency,\" the types of disfluencies assessed, the proposed causal mechanism of disfluencies in multilinguals, the frequency of disfluencies, and identified group differences in disfluencies. Of the 792 records screened, 68 were included in the review.</p><p><strong>Results: </strong>Similar types of speech disfluencies were present in the speech of multilinguals who do and do not stutter. However, a clear difference was apparent in the frequency of stuttering-like disfluencies between groups; the frequency of other disfluencies had a similar range. Monolingual guidelines do not apply to multilingual speakers. Finally, most records reported a higher frequency of speech disfluencies in both groups' less dominant and/or proficient language.</p><p><strong>Conclusions: </strong>This review provides insights on assessing stuttering in multilingual clients to avoid misdiagnosis of stuttering in this population. Research into the aspects of speech disfluencies in multilingual individuals who do and do not stutter is limited, and further research is warranted to deepen our understanding of how different aspects of multilingualism influence the manifestation of speech disfluencies in both groups. Therefore, there is a strong need for a systematic and uniform approach to define and evaluate speech disfluencies in multilinguals.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.29441882.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"3869-3886"},"PeriodicalIF":2.2,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144645209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cepstral Peak Prominence in Nondysphonic Children Using Praat and Analysis of Dysphonia in Speech and Voice.","authors":"Ashwini Joshi, Shaheen N Awan, Marianna Rubino, Danielle Devore, Teresa Procter, Julina Ongkasuwan","doi":"10.1044/2025_JSLHR-25-00046","DOIUrl":"10.1044/2025_JSLHR-25-00046","url":null,"abstract":"<p><strong>Purpose: </strong>This study examined the effect of age on cepstral peak prominence (CPP) in nondysphonic children between 3;0 and 17;11 (years;months) for two computer programs: Analysis of Dysphonia in Speech and Voice (ADSV) and Praat. Normative estimates for this population, the effect of sex, software, and stimuli on CPP, and the covarying impact of fundamental frequency (<i>F</i>0) were examined.</p><p><strong>Method: </strong>CPP and <i>F</i>0 were collected for 103 children (44 males, 59 females) from the vowel /a/ and the all-voiced sentence \"We were away a year ago,\" within the following age ranges: 3;0-6;11, 7;0-10;11, 11;0-14;11, and 15;0-17;11. Effects of age, sex, stimuli, and software were examined using analyses of variance and post hoc means comparisons. The presence and strength of relationships between age, CPP, <i>F</i>0, and measures of CPP using ADSV versus Praat were evaluated using Pearson's and Spearman's correlations. Stepwise multiple regression analyses were computed to predict CPP from age and <i>F</i>0. Estimates of CPP normative cutoffs for Age × Sex groupings were also calculated.</p><p><strong>Results: </strong>Significant differences between 15;0-17;11 versus younger age children and a significant correlation between age and CPP were observed. Mean CPP values differed by sex, stimuli, and software. Age and <i>F</i>0 are significant predictors of CPP; however, the observed increase in CPP with increasing age in males is primarily due to the substantial decrease in <i>F</i>0 postpuberty. Significant effects of stimuli and software on CPP values were also observed.</p><p><strong>Conclusions: </strong>The findings support the hypotheses that CPP is correlated with age during the 3;0-17;11 span, with particular increases in postpubertal children. However, \"normative\" age-based expectations should be approached with caution since the general effect of age may be superseded by specific changes in <i>F</i>0 where a lowering of <i>F</i>0 is significantly associated with increases in CPP.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.29395787.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"3733-3747"},"PeriodicalIF":2.2,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144556528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring Adults' Readiness to Make a Positive Change to Stuttering and the Cognitive Processes That Predict It.
Naomi H Rodgers, Hope Gerlach-Houck, Andrea Paiva, Mark Robbins
Journal of Speech, Language, and Hearing Research (JSLHR), pp. 3703-3719. Published 2025-08-12. DOI: 10.1044/2025_JSLHR-24-00787

Purpose: The purpose of this study was to validate three interdependent scales of readiness to change among adults who stutter. The scales are based on the transtheoretical model (TTM) and were previously developed through qualitative work with adults who stutter and stuttering specialists regarding the characteristics of making a positive change to how one lives with stuttering.

Method: The anonymous online survey was fully completed by 246 North American adults who stutter. The survey included three TTM scales (Stage of Change, Decisional Balance, and Situational Self-Efficacy) and the Quality of Life subscale of the Overall Assessment of the Speaker's Experience of Stuttering (OASES-IV). Exploratory factor analyses were conducted to determine model fit and reduce the TTM scales to the most meaningful items. External validity was assessed by examining relationships between constructs.

Results: The five stages of change readily applied to adults' readiness to make positive changes to how they live with stuttering. The Decisional Balance scale was reduced to 20 items subcategorized into three subscales (Interpersonal Pros, Internal Pros, and Cons), all of which differed significantly across stages of change. The Situational Self-Efficacy scale was reduced to 17 items subcategorized into two subscales (Interpersonal Situations and Internal Situations), of which the former differed significantly across stages of change. The OASES-IV also differed significantly across stages of change.

Conclusions: The findings suggest that, for adults, the behaviors subsumed in making positive changes to stuttering fit the TTM framework, including the stages of change and the cognitive predictors of readiness to change (decisional balance and situational self-efficacy). The relationships among our measures (except for the cons of change) mirror how these measures behave in adolescents who stutter and in other health populations, further corroborating the application of the TTM to the stuttering experience. Future research is warranted to confirm the validity of these measures across stages of change and to examine how they can inform stage-matched interventions for people who stutter.

Supplemental material: https://doi.org/10.23641/asha.29319278

Effects of Contralateral Noise on Cortical Auditory Evoked Potential Latencies and Amplitudes.
Donguk Lee, James D Lewis, Ashley Harkrider, Mark Hedrick
Journal of Speech, Language, and Hearing Research (JSLHR), pp. 4123-4138. Published 2025-08-12. DOI: 10.1044/2025_JSLHR-24-00698

Purpose: There is evidence from past animal work that the neural signal-to-noise ratio (SNR) is modulated through the action of the medial olivocochlear reflex (MOCR), an effect commonly referred to as unmasking. However, evidence of unmasking in humans is limited, perhaps due to the traditional approach of measuring the MOCR using otoacoustic emissions, a preneural metric. The amplitudes and latencies of the late latency response (LLR) are sensitive to changes in SNR and may provide a means to noninvasively evaluate MOCR unmasking at the neural level. The purpose of this study was to investigate MOCR-mediated unmasking of a signal in ipsilateral noise in humans using the LLR.

Method: Fifty normal-hearing adults were recruited. The LLR was measured for a 60 dB SPL, 1-kHz tone in both ipsilateral quiet and ipsilateral noise, with and without presentation of contralateral noise. For the ipsilateral noise conditions, the noise was presented at three different levels to achieve SNRs of +5, +15, and +25 dB. The contralateral noise was always 60 dB SPL white noise. LLR latencies (P1, N1, and P2) and interpeak amplitudes (P1-N1 and N1-P2) were measured for all conditions. In addition, otoacoustic emissions (OAEs) for a 1-kHz tone burst were measured in ipsilateral quiet both with and without contralateral noise. The same contralateral noise was used for both OAEs and LLRs.

Results: For the ipsilateral noise conditions, SNR had a significant effect on LLR latencies and interpeak amplitudes: latencies decreased, and amplitudes increased, as SNR improved. The presentation of contralateral noise had a significant effect on P1 and N1 latencies, both of which decreased. LLR interpeak amplitudes significantly increased upon the presentation of contralateral noise. For the ipsilateral quiet condition, there were no significant effects of contralateral noise on LLR metrics. Although OAE magnitudes were significantly reduced upon presentation of contralateral noise, consistent significant relationships between OAE magnitude changes and changes in the LLR metrics were not found.

Conclusion: Findings suggest that the presentation of contralateral noise enhances the neural response to a tone in ipsilateral noise, potentially through MOC efferent feedback.

Supplemental material: https://doi.org/10.23641/asha.29441903

{"title":"Initial Evaluation of a New Auditory Attention Task for Assessing Alerting, Orienting, and Executive Control Attention.","authors":"Arianna N LaCroix, Emily Sebranek","doi":"10.1044/2025_JSLHR-24-00513","DOIUrl":"10.1044/2025_JSLHR-24-00513","url":null,"abstract":"<p><strong>Purpose: </strong>Attention is a key cognitive function crucial for selecting and processing information. It is often divided into three components: alerting, orienting, and executive control. While there are tasks designed to simultaneously assess the attentional subsystems in the visual modality, creating an effective auditory task has been challenging, especially for clinical populations. This study aimed to explore whether a new Auditory Attention Task (AAT) measures all three attentional subsystems in neurotypical controls.</p><p><strong>Method: </strong>Forty-eight young adults completed the AAT, where they judged the duration of the first of two tones while ignoring the second tone's duration. Executive control was assessed by comparing performance on trials with conflict (incongruent) and without conflict (congruent). The tones could also differ on frequency and performance differences between trials with same versus different frequencies measured orienting attention. A warning cue was presented before the first pure tone on half of the trials. Alerting attention was measured by comparing performance on trials with and without a warning cue.</p><p><strong>Results: </strong>The AAT measured alerting, orienting, and executive control attention as expected. Participants were faster on warned than nonwarned trials (alerting) and on same- versus different-frequency trials (orienting). Participants were also faster and more accurate on same- versus different-duration trials (executive control). We also observed several interactions between the attentional subsystems.</p><p><strong>Conclusions: </strong>Our results demonstrate that the AAT measured alerting, orienting, and executive control attention. However, additional work is needed to explore the AAT's utility in clinical populations, such as people with aphasia.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.29525717.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4049-4060"},"PeriodicalIF":2.2,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12384943/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of Hearing Aids on Mandarin Voice Emotion Recognition With Bimodal Listeners.
Yuqi Xia, Lei Ren, Xuehao Zhang, Yan Huang, Chaogang Wei, Yuhe Liu
Journal of Speech, Language, and Hearing Research (JSLHR), pp. 4139-4157. Published 2025-08-12. DOI: 10.1044/2025_JSLHR-23-00191

Purpose: Cochlear implant (CI) listeners have deficits in emotional perception due to limited spectrotemporal fine structure. Contralateral hearing aids (HAs) carry additional acoustic cues for emotion recognition and improve the quality of life (QoL) in these individuals. This study aimed to investigate the effects of HAs on voice emotion recognition in Mandarin-speaking bimodal adults.

Method: Nineteen Mandarin-speaking bimodal adults (mean age = 30.63 ± 8.73 years) and 20 normal-hearing (NH) adults (mean age = 27.15 ± 4.61 years) completed voice emotion (happy, angry, sad, scared, and neutral) recognition and monosyllable recognition tasks. Bimodal listeners completed voice emotion recognition and monosyllable recognition tasks with bimodal listening and with CI-alone listening. Health-related QoL in bimodal listeners was evaluated using the Chinese version of the Nijmegen Cochlear Implant Questionnaire (NCIQ).

Results: Acoustic analyses showed substantial variations across emotions in the voice emotion utterances, mainly in measures of mean fundamental frequency (F0), F0 range, and duration. NH listeners significantly outperformed bimodal listeners in the voice emotion recognition and monosyllable recognition tasks, with significantly higher accuracy scores, higher Hu values, and shorter reaction times. Participants were mainly affected by F0 cues in the voice emotion recognition task. Bimodal listeners perceived voice emotions more accurately and faster with bimodal devices than with the CI alone, suggesting improved accuracy and decreased listening effort with the addition of HAs. Voice emotion recognition accuracy was associated with residual hearing in the nonimplanted ear and with monosyllable recognition accuracy in bimodal listeners. The NCIQ scores were not significantly correlated with accuracy scores for either speech recognition or voice emotion recognition in bimodal listeners after correction for multiple comparisons.

Conclusions: Despite experiencing more challenges than NH peers, Mandarin-speaking bimodal listeners showed improved voice emotion perception when using contralateral HAs. Bimodal listeners with better residual hearing in the nonimplanted ear and better speech recognition ability showed better voice emotion perception.

{"title":"Spoken Language Dual-Task Effects in Typical Aging: A Systematic Review.","authors":"Christos Salis, Laura L Murray, Rawand Jarrar","doi":"10.1044/2025_JSLHR-24-00826","DOIUrl":"10.1044/2025_JSLHR-24-00826","url":null,"abstract":"<p><strong>Purpose: </strong>Many studies have shown that several spoken language production skills are negatively affected by the typical aging process. In contrast, how language is affected when older adults are asked to speak under conditions of distraction using dual- or multitask paradigms has received less empirical attention, even though such conditions align with the demands of everyday communication contexts. Accordingly, the objectives in this original systematic review were to synthesize and appraise literature on spoken language production in neurotypical older adults when they talk under conditions of distraction. To our knowledge, this is the first systematic review that focuses on this topic.</p><p><strong>Method: </strong>Five databases (EMBASE, LLBA, Medline, PsycINFO, Web of Science Core Collection) were searched (from databases' inception to January 2024) for eligible studies using comprehensive search terms. All steps from screening of records, selection of studies, data extraction, and critical appraisal were carried out by two reviewers who worked independently.</p><p><strong>Results: </strong>Thirteen studies culminated in the qualitative evidence synthesis. Critical appraisal was carried out and showed that the current evidence base is overall weak.</p><p><strong>Conclusions: </strong>The findings were mixed as to whether dual-task costs (i.e., worse performance in single-task, talking only) are evident in aging. However, speech fluency in discourse appears to be more vulnerable under conditions of distraction in older than younger adults. Across all included studies, significant methodological shortcomings were present. Whereas this literature points to some age-related changes when speaking in more challenging, dual-task contexts, further research is clearly needed on topics such as the types of dual-task contexts that reveal age-related language changes, the role of instructions on task prioritization, and the role of influential participant variables (e.g., cardiovascular risk factors) on dual-task language performance in older adults.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.29525795.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4071-4086"},"PeriodicalIF":2.2,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144683962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}