Does Generalization Occur Following Speech Therapy? A Study in Children With a Cleft Palate.
Cassandra Alighieri, Camille De Coster, Kim Bettens, Valerie Pereira
Journal of Speech Language and Hearing Research, pp. 91-104. Published January 2, 2025. DOI: 10.1044/2024_JSLHR-24-00292

Purpose: This study compared the occurrence of different types of generalization (within-class, across-class, and total generalization) following motor-phonetic speech therapy and linguistic-phonological speech therapy in children with a cleft palate ± cleft lip (CP ± L).

Method: Thirteen children with a CP ± L (M_age = 7.50 years) who had previously participated in a block-randomized, sham-controlled trial comparing motor-phonetic therapy (n = 7) and linguistic-phonological therapy (n = 6) took part in this study. Speech samples consisting of word imitation and sentence imitation were collected at different data points before and after therapy and perceptually assessed using the Dutch translation of the Cleft Audit Protocol for Speech-Augmented. The percentages of within-class, across-class, and total generalization were calculated for the different target consonants. Generalization in the two groups was compared over time using linear mixed models (LMMs).

Results: LMMs revealed significant Time × Group interactions for the percentage of within-class generalization and the percentage of total generalization in sentence imitation tasks, indicating that these percentages were significantly higher in the group of children who received linguistic-phonological intervention. No Time × Group interactions were found for the percentages of across-class generalization.

Conclusions: Generalization can occur following both motor-phonetic and linguistic-phonological intervention. A linguistic-phonological approach, however, resulted in larger percentages of within-class and total generalization. As children with a CP ± L often receive yearlong intervention to eliminate cleft-related speech sound errors, these findings on the superior generalization effects of linguistic-phonological intervention are important to consider in clinical practice.
{"title":"Concurrent Cognitive Task Alters Postural Control Performance of Young Adults With Unilateral Cochlear Implants.","authors":"Emre Orhan, İsa Tuncay Batuk, Merve Ozbal Batuk","doi":"10.1044/2024_JSLHR-24-00426","DOIUrl":"10.1044/2024_JSLHR-24-00426","url":null,"abstract":"<p><strong>Purpose: </strong>The aim of this study was to investigate the balance performances of young adults with unilateral cochlear implants (CIs) in a dual-task condition.</p><p><strong>Method: </strong>Fifteen young adults with unilateral CIs and 15 healthy individuals were included in the study. The balance task was applied using the Sensory Organization Test via Computerized Dynamic Posturography. The Backward Digit Recall task was applied as an additional concurrent cognitive task. In the balance task, participants completed four different conditions, which gradually became more difficult: Condition 1: fixed platform, eyes open; Condition 3: fixed platform, eyes open and visual environment sway; Condition 4: platform sway, eyes open; Condition 6: platform sway, eyes open and visual environment sway. To evaluate the dual-task condition performance, participants were given cognitive and motor tasks simultaneously.</p><p><strong>Results: </strong>Visual (<i>p</i> = .016), vestibular (<i>p</i> < .001), and composite balance scores (<i>p</i> < .001) of CI users were statistically significantly lower than the control group. Condition 3 (<i>p</i> = .003), Condition 4 (<i>p</i> = .007), and Condition 6 (<i>p</i> < .001) balance scores of CI users in the single-task condition were statistically significantly lower than controls. Condition 6 (<i>p</i> < .001) balance scores of CI users in the dual-task condition were statistically significantly lower than the control group. Condition 1 score (<i>p</i> = .002) of the CI users in the dual-task condition showed a statistically significant decrease compared to the balance score in the single-task condition, while the Condition 6 score (<i>p</i> = .011) in the dual-task condition was statistically significantly higher than the balance score in the single-task condition.</p><p><strong>Conclusions: </strong>The balance performance of individuals with CIs in the dual-task condition was worse than typical healthy individuals. It can be suggested that dual-task performances should be included in the vestibular rehabilitation process in CI users in the implantation process in terms of balance abilities in multitasking conditions and risk of falling.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"377-387"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validation of the Language ENvironment Analysis (LENA) Automated Speech Processing Algorithm Labels for Adult and Child Segments in a Sample of Families From India.","authors":"Shoba S Meera, Divya Swaminathan, Sri Ranjani Venkata Murali, Reny Raju, Malavi Srikar, Sahana Shyam Sundar, Senthil Amudhan, Alejandrina Cristia, Rahul Pawar, Achuth Rao, Prathyusha P Vasuki, Shree Volme, Ashok Mysore","doi":"10.1044/2024_JSLHR-24-00099","DOIUrl":"10.1044/2024_JSLHR-24-00099","url":null,"abstract":"<p><strong>Purpose: </strong>The Language ENvironment Analysis (LENA) technology uses automated speech processing (ASP) algorithms to estimate counts such as total adult words and child vocalizations, which helps understand children's early language environment. This ASP has been validated in North American English and other languages in predominantly monolingual contexts but not in a multilingual context like India. Thus, the current study aims to validate the classification accuracy of the LENA algorithm specifically focusing on speaker recognition of adult segments (AdS) and child segments (ChS) in a sample of bi/multilingual families from India.</p><p><strong>Method: </strong>Thirty neurotypical children between 6 and 24 months (<i>M</i> = 12.89, <i>SD</i> = 4.95) were recruited. Participants were growing up in bi/multilingual environment hearing a combination of Kannada, Tamil, Malayalam, Telugu, Hindi, and/or English. Daylong audio recordings were collected using LENA and processed using the ASP to automatically detect segments across speaker categories. Two human annotators manually annotated ~900 min (37,431 segments across speaker categories). Performance accuracy (recall and precision) was calculated for AdS and ChS.</p><p><strong>Results: </strong>The recall and precision for AdS were 0.62 (95% confidence interval [CI] [0.61, 0.63]) and 0.83 (95% CI [0.8, 0.83]), respectively. This indicated that 62% of the segments identified as AdS by the human annotator were also identified as AdS by the LENA ASP algorithm and 83% of the segments labeled by the LENA ASP as AdS were also labeled by the human annotator as AdS. Similarly, the recall and precision for ChS were 0.65 (95% CI [0.64, 0.66]) and 0.55 (95% CI [0.54, 0.56]), respectively.</p><p><strong>Conclusions: </strong>This study documents the performance of the ASP in correctly classifying speakers as adult or child in a sample of families from India, indicating recall and precision that is relatively low. This study lays the groundwork for future investigations aiming to refine the algorithm models, potentially facilitating more accurate performance in bi/multilingual societies like India.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.27910710.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"40-53"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11842061/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Links Between Early Prelinguistic Communication and Later Expressive Language in Toddlers With Autistic and Non-Autistic Siblings.
Jennifer E Markfeld, Zoë Kiemel, Pooja Santapuram, Samantha L Bordman, Grace Pulliam, S Madison Clark, Lauren H Hampton, Bahar Keçeli-Kaysili, Jacob I Feldman, Tiffany G Woynaroski
Journal of Speech Language and Hearing Research, pp. 178-192. Published January 2, 2025. DOI: 10.1044/2024_JSLHR-23-00794

Purpose: The present study explored the extent to which early prelinguistic communication skills predict expressive language in toddlers with autistic siblings (Sibs-autism), who are known to be at high likelihood for autism and language disorder, and a comparison group of toddlers with non-autistic older siblings (Sibs-NA).

Method: Participants were 51 toddlers (29 Sibs-autism, 22 Sibs-NA) aged 12-18 months at the first time point in the study (Time 1); toddlers were seen again 9 months later (Time 2). Three prelinguistic communication skills (intentional communication, vocalization complexity, and responding to joint attention) were measured at Time 1 via the Communication and Symbolic Behavior Scales Developmental Profile-Behavior Sample. An expressive language aggregate was calculated for each participant at Time 2. A series of correlation and multiple regression models was run to evaluate associations between prelinguistic communication skills at Time 1 and expressive language at Time 2.

Results: Vocalization complexity and intentional communication displayed significant zero-order correlations with expressive language across sibling groups. Vocalization complexity and responding to joint attention did not add significant predictive value for later expressive language after covarying for intentional communication across groups. However, sibling group moderated the association between vocalization complexity and later expressive language, such that vocalization complexity displayed incremental validity for predicting later expressive language, covarying for intentional communication, only within Sibs-NA.

Conclusions: Results indicate that prelinguistic communication skills, in particular intentional communication, show promise for predicting later expressive language in siblings of autistic children. These findings provide additional empirical support for the notion that early preemptive interventions targeting prelinguistic communication skills, especially intentional communication, may have the potential to scaffold language acquisition and support more optimal language outcomes in this population at high likelihood for a future diagnosis of both autism and language disorder.

Supplemental material: https://doi.org/10.23641/asha.27745437
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11842043/pdf/
An Anechoic, High-Fidelity, Multidirectional Speech Corpus.
Margaret K Miller, Vahid Delaram, Allison Trine, Rohit M Ananthanarayana, Emily Buss, Brian B Monson, G Christopher Stecker
Journal of Speech Language and Hearing Research, pp. 411-418. Published January 2, 2025. DOI: 10.1044/2024_JSLHR-24-00296

Introduction: We currently lack speech testing materials faithful to broader aspects of real-world auditory scenes, such as speech directivity and extended high-frequency (EHF; > 8 kHz) content, that have demonstrable effects on speech perception. Here, we describe the development of a multidirectional, high-fidelity speech corpus using multichannel anechoic recordings that can be used for future studies of speech perception in complex environments by diverse listeners.

Design: Fifteen male and 15 female talkers (21.3-60.5 years) recorded Bamford-Kowal-Bench (BKB) Standard Sentence Test lists, digits 0-10, and a 2.5-min unscripted narrative. Recordings were made in an anechoic chamber with 17 free-field condenser microphones spanning 0°-180° azimuth around the talker, at a 48 kHz sampling rate.

Results: Recordings resulted in a large corpus containing four BKB lists, 10 digits, and narratives produced by 30 talkers, plus an additional 17 BKB lists (21 total) produced by a subset of six talkers.

Conclusions: The goal of this study was to create an anechoic, high-fidelity, multidirectional speech corpus using standard speech materials. More naturalistic narratives, useful for creating babble noise and speech maskers, were also recorded. A large group of 30 talkers permits testers to select speech materials based on talker characteristics relevant to a specific task. The resulting corpus allows for more diverse and precise speech recognition testing, including tests of the effects of speech directivity and EHF content. Recordings are publicly available.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11842069/pdf/
Reassessing the Benefits of Audiovisual Integration to Speech Perception and Intelligibility.
Brandon O'Hanlon, Christopher J Plack, Helen E Nuttall
Journal of Speech Language and Hearing Research, pp. 26-39. Published January 2, 2025. DOI: 10.1044/2024_JSLHR-24-00162

Purpose: In difficult listening conditions, the visual system assists with speech perception through lipreading. Stimulus onset asynchrony (SOA) is used to investigate the interaction between the two modalities in speech perception. Previous estimates of the audiovisual benefit and of the SOA integration period differ widely. A limitation of previous research is a lack of consideration of visemes (categories of phonemes defined by similar lip movements when produced by a speaker) to ensure that selected phonemes are visually distinct. This study aimed to reassess the benefits of audiovisual lipreading to speech perception when stimuli from different viseme categories are presented in noise, and to investigate the effects of SOA on these stimuli.

Method: Sixty participants were tested online and presented with audio-only and audiovisual stimuli containing the speaker's lip movements. The speech was presented either with or without noise at six SOAs (0, 200, 216.6, 233.3, 250, and 266.6 ms). Participants discriminated between speech syllables with button presses.

Results: The benefit of visual information was weaker than in previous studies. Reaction times increased significantly as SOA was introduced, but there were no significant effects of SOA on accuracy. Furthermore, exploratory analyses suggested that the effect was not equal across viseme categories: "ba" was more difficult to recognize than "ka" in noise.

Conclusion: In summary, the findings suggest that the contribution of audiovisual integration to speech processing is weaker when visemes are considered, and was not sufficient to identify a full integration period.

Supplemental material: https://doi.org/10.23641/asha.27641064
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11842087/pdf/
Relationships Between Hearing-Related and Health-Related Variables in Academic Progress of Children With Unilateral Hearing Loss.
Erin M Picou, Hilary Davis, Leigh Anne Tang, Lisa Bastarache, Anne Marie Tharpe
Journal of Speech Language and Hearing Research, pp. 364-376. Published January 2, 2025. DOI: 10.1044/2024_JSLHR-24-00133

Purpose: School-age children with unilateral hearing loss are at increased risk of exhibiting academic difficulties, yet approximately half of children with unilateral hearing loss will not require additional support. There is a dearth of information to assist in determining which of these children will exhibit academic deficits and which will not. The purpose of this study was to identify hearing- and health-related factors that contribute to adverse educational progress in children with permanent unilateral hearing loss. Specific indicators of academic concern identified during the school-age years included the need for specialized academic services, receipt of speech-language therapy, or parent/teacher concerns about academics or speech-language development.

Method: This study provides an in-depth analysis of a previously described patient cohort developed from de-identified electronic health records. Factors of interest included potentially relevant hearing-related risk factors (e.g., degree, type, and laterality of hearing loss) in addition to health-related factors that could be extracted from the electronic health records (e.g., sex, premature birth, history of significant otitis media).

Results: Being born preterm, having a history of pressure equalization tubes, or having conductive or mixed hearing loss more than doubled the risk of demonstrating adverse educational progress. Laterality and degree of loss were generally not significantly related to academic progress.

Conclusions: Approximately half of the school-age children with permanent unilateral hearing loss in this cohort experienced some academic challenges. Birth history and middle ear pathology were important predictors of adverse educational progress.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11842058/pdf/
"1-800-Help-Me-With-Open-Science-Stuff": A Qualitative Examination of Open Science Practices in Communication Sciences and Disorders.
Danika L Pfeiffer, Austin Thompson, Brittany Ciullo, Micah E Hirsch, Mariam El Amin, Andrea Ford, Jessica Riccardi, Elaine Kearney
Journal of Speech Language and Hearing Research, pp. 105-128. Published January 2, 2025. DOI: 10.1044/2024_JSLHR-24-00378

Purpose: The purpose of this qualitative study was to examine the perceptions of communication sciences and disorders (CSD) assistant professors in the United States regarding barriers and facilitators to engaging in open science practices, and to identify opportunities for improving open science training and support in the field.

Method: Thirty-five assistant professors (16 from very high research activity [R1] institutions, 19 from institutions with other Carnegie classifications) each participated in a 1-hr virtual focus group conducted via Zoom. The researchers used a conventional content analysis approach to analyze the focus group data and develop categories from the discussions.

Results: Five categories were developed from the focus groups: (a) a desire to learn about open science through opportunities for independent learning and learning with peers; (b) perceived benefits of engaging in open science for assistant professors' careers, the broader scientific community, and the quality of research in the field of CSD; (c) personal factors that act as barriers and/or facilitators to engaging in open science practices; (d) systemic factors that act as barriers and/or facilitators to engaging in open science practices; and (e) differences in the perceptions of R1 and non-R1 assistant professors.

Conclusions: Assistant professors in CSD perceive benefits of open science for their careers, the scientific community, and the field. However, they face many barriers (e.g., time, lack of knowledge and training) that impede their engagement in open science practices. Preliminary recommendations for CSD assistant professors, academic institutions, publishers, and funding agencies are provided to reduce barriers to engagement in open science practices.

Supplemental material: https://doi.org/10.23641/asha.27996839
Systematic Review and Meta-Analysis of Enhanced Milieu Teaching.
Veronica Y Kang, Sunyoung Kim, Emily V Gregori, Daniel M Maggin, Jason C Chow, Hongyang Zhao
Journal of Speech Language and Hearing Research, pp. 259-281. Published January 2, 2025. DOI: 10.1044/2024_JSLHR-24-00260

Purpose: Early language intervention is essential for children with indicators of language delay. Enhanced milieu teaching (EMT) is a naturalistic intervention that supports the language development of children with emerging language. We conducted a systematic review and meta-analysis of all qualifying single-case and group design studies that evaluated the experimental effects of EMT on child outcomes.

Method: We evaluated risk of bias in the included studies and conducted a descriptive analysis of study quality, effect sizes, and demographics. We reviewed a total of 29 single-case and 17 group design studies in which 1,590 children participated.

Results: Of the 46 studies, 39 met the What Works Clearinghouse standards without reservations, showing low risk of bias. Effects were comparable whether EMT was implemented alone or with another intervention component, and EMT was more effective when implemented by caregivers than by therapists. Most studies did not report sufficient participant demographics.

Conclusions: The EMT research literature published thus far is of high study quality, effects across studies are comparable, and the intervention has been studied across a wide range of delivery modalities, contexts, implementers, and samples. Future research could systematically examine the effects of EMT and explore these varying intervention delivery, implementer, and learner characteristics as moderators.
Fluid Intelligence Partially Mediates the Effect of Working Memory on Speech Recognition in Noise.
Erik Marsja, Emil Holmer, Victoria Stenbäck, Andreea Micula, Carlos Tirado, Henrik Danielsson, Jerker Rönnberg
Journal of Speech Language and Hearing Research, pp. 399-410. Published January 2, 2025. DOI: 10.1044/2024_JSLHR-24-00465

Purpose: Although the existing literature has explored the link between cognitive functioning and speech recognition in noise, the specific role of fluid intelligence still needs to be studied. Given the established association between working memory capacity (WMC) and fluid intelligence, and the predictive power of WMC for speech recognition in noise, we aimed to elucidate the mediating role of fluid intelligence.

Method: We used data from the n200 study, a longitudinal investigation into aging, hearing ability, and cognitive functioning. We analyzed two age-matched samples: participants with hearing aids and a group with normal hearing. WMC was assessed using the Reading Span task, and fluid intelligence was measured with Raven's Progressive Matrices. Speech recognition in noise was evaluated using Hagerman sentences presented to target 80% speech-reception thresholds in four-talker babble. Data were analyzed using mediation analysis to examine fluid intelligence as a mediator between WMC and speech recognition in noise.

Results: We found a partial mediating effect of fluid intelligence on the relationship between WMC and speech recognition in noise, and hearing status did not moderate this effect. In other words, WMC and fluid intelligence were related, and fluid intelligence partially explained the influence of WMC on speech recognition in noise.

Conclusions: This study shows the importance of fluid intelligence in speech recognition in noise, regardless of hearing status. Future research should use other advanced statistical techniques and explore various speech recognition tests and background maskers to deepen our understanding of the interplay between WMC and fluid intelligence in speech recognition.