{"title":"Right-Hemispheric White Matter Organization Is Associated With Speech Timing in Autistic Children.","authors":"Kelsey E Davison, Talia Liu, Rebecca M Belisle, Tyler K Perrachione, Zhenghan Qi, John D E Gabrieli, Helen Tager-Flusberg, Jennifer Zuk","doi":"10.1044/2025_JSLHR-24-00548","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00548","url":null,"abstract":"<p><strong>Purpose: </strong>Converging research suggests that speech timing, including altered rate and pausing when speaking, can distinguish autistic individuals from nonautistic peers. Although speech timing can impact effective social communication, it remains unclear what mechanisms underlie individual differences in speech timing in autism.</p><p><strong>Method: </strong>The present study examined the organization of speech- and language-related neural pathways in relation to speech timing in autistic and nonautistic children (24 autistic children, 24 nonautistic children [ages: 5-17 years]). Audio recordings from a naturalistic language sampling task (via narrative generation) were transcribed to extract speech timing features (speech rate, pause duration). White matter organization (as indicated by fractional anisotropy [FA]) was estimated for key tracts bilaterally (arcuate fasciculus, superior longitudinal fasciculus [SLF], inferior longitudinal fasciculus [ILF], frontal aslant tract [FAT]).</p><p><strong>Results: </strong>Results indicate associations between speech timing and right-hemispheric white matter organization (FA in the right ILF and FAT) were specific to autistic children and not observed among nonautistic controls. 
Among nonautistic children, associations with speech timing were specific to the left hemisphere (FA in the left SLF).</p><p><strong>Conclusion: </strong>Overall, these findings enhance understanding of the neural architecture influencing speech timing in autistic children and, thus, carry implications for understanding potential neural mechanisms underlying speech timing differences in autism.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28934432.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-15"},"PeriodicalIF":2.2,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144103018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Fundamental Frequency and Vocal Tract Resonance on Sentence Recognition in Noise.","authors":"Jing Yang, Xianhui Wang, Victoria Costa, Li Xu","doi":"10.1044/2025_JSLHR-24-00758","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00758","url":null,"abstract":"<p><strong>Purpose: </strong>This study examined the effects of change in a talker's sex-related acoustic properties (fundamental frequency [<i>F</i>0] and vocal tract resonance [VTR]) on speech recognition in noise.</p><p><strong>Method: </strong>The stimuli were Hearing in Noise Test sentences, with the <i>F</i>0 and VTR of the original male talker manipulated into four conditions: low <i>F</i>0 and low VTR (L<sub><i>F</i>0</sub>L<sub>VTR</sub>; i.e., the original recordings), low <i>F</i>0 and high VTR (L<sub><i>F</i>0</sub>H<sub>VTR</sub>), high <i>F</i>0 and high VTR (H<sub><i>F</i>0</sub>H<sub>VTR</sub>), and high <i>F</i>0 and low VTR (H<sub><i>F</i>0</sub>L<sub>VTR</sub>). The listeners were 42 English-speaking, normal-hearing adults (21-31 years old). The sentences mixed with speech spectrum-shaped noise at various signal-to-noise ratios (i.e., -10, -5, 0, and +5 dB) were presented to the listeners for recognition.</p><p><strong>Results: </strong>The results revealed no significant differences between the H<sub><i>F</i>0</sub>H<sub>VTR</sub> and L<sub><i>F</i>0</sub>L<sub>VTR</sub> conditions in sentence recognition performance and the estimated speech reception thresholds (SRTs). 
However, in the H<sub><i>F</i>0</sub>L<sub>VTR</sub> and L<sub><i>F</i>0</sub>H<sub>VTR</sub> conditions, the recognition performance was reduced, and the listeners showed significantly higher SRTs relative to those in the H<sub><i>F</i>0</sub>H<sub>VTR</sub> and L<sub><i>F</i>0</sub>L<sub>VTR</sub> conditions.</p><p><strong>Conclusion: </strong>These findings indicate that male and female voices with matched <i>F</i>0 and VTR (e.g., L<sub><i>F</i>0</sub>L<sub>VTR</sub> and H<sub><i>F</i>0</sub>H<sub>VTR</sub>) yield equivalent speech recognition in noise, whereas voices with mismatched <i>F</i>0 and VTR may reduce intelligibility in noisy environments.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.29052305.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-12"},"PeriodicalIF":2.2,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144103013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Erratum to \"Oral Diadochokinetic Performance on Perceptual and Acoustic Measures for Typically Developing Cantonese-Speaking Preschool Children\".","authors":"","doi":"10.1044/2025_JSLHR-25-00161","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-25-00161","url":null,"abstract":"","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1"},"PeriodicalIF":2.2,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multidisciplinary Clinical Assessment and Interventions for Childhood Listening Difficulty and Auditory Processing Disorder: Relation Between Research Findings and Clinical Practice.","authors":"David R Moore, Li Lin, Ritu Bhalerao, Jody Caldwell-Kurtzman, Lisa L Hunter","doi":"10.1044/2025_JSLHR-24-00306","DOIUrl":"10.1044/2025_JSLHR-24-00306","url":null,"abstract":"<p><strong>Purpose: </strong>Listening difficulty (LiD), often classified as auditory processing disorder (APD), has been studied in both research and clinic settings. The aim of this study was to examine the predictive relation between these two settings. In our SICLiD (Sensitive Indicators of Childhood Listening Difficulties) research study, children with normal audiometry, but caregiver-reported LiD, performed poorly on both listening and cognitive tests. Here, we examined results of clinical assessments and interventions for these children in relation to research performance.</p><p><strong>Method: </strong>Study setting was a tertiary pediatric hospital. Electronic medical records were reviewed for 64 children aged 6-13 years recruited into a SICLiD LiD group based on a caregiver report (Evaluation of Children's Listening and Processing Skill [ECLiPS]). The review focused on clinical assessments and interventions provided by audiology, occupational therapy, psychology (developmental and behavioral pediatrics), and speech-language pathology services, prior to study participation. Descriptive statistics on clinical encounters, identified conditions, and interventions were compared with quantitative, standardized performance on research tests. <i>z</i> scores were compared for participants with and without each clinical condition using univariate and logistic prediction analyses.</p><p><strong>Results: </strong>Overall, 24 clinical categories related to LiD, including APD, were identified. 
Common conditions were Attention (32%), Language (28%), Hearing (18%), Anxiety (16%), and Autism Spectrum Disorder (6%). Performance on research tests varied significantly between providers, conditions, and interventions. Quantitative research data combined with caregiver reports provided reliable predictions of all clinical conditions except APD. Significant correlations in individual tests were scarce but included the SCAN Composite score, which predicted clinical language and attention difficulties, but not APD diagnoses.</p><p><strong>Conclusions: </strong>The variety of disciplines, assessments, conditions, and interventions revealed here supports previous studies showing that LiD is a multifaceted problem of neurodevelopment. Comparisons between clinical- and research-based assessments suggest a path that prioritizes caregiver reports and selected psychometric tests for screening and diagnostic purposes.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28907780.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-14"},"PeriodicalIF":2.2,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contributions of Behavioral and Electrophysiological Spectrotemporal Processing to the Perception of Degraded Speech in Younger and Older Adults.","authors":"Bruna S Mussoi, A'Diva Warren, Jordin Benedict, Serena Sereki, Julia Jones Huyck","doi":"10.1044/2025_JSLHR-24-00667","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00667","url":null,"abstract":"<p><strong>Purpose: </strong>The aim of this study was to evaluate (a) the effect of aging on spectral and temporal resolution, as measured both behaviorally and electrophysiologically, and (b) the contributions of spectral and temporal resolution and cognition to speech perception in younger and older adults.</p><p><strong>Method: </strong>Eighteen younger and 18 older listeners with normal hearing or no more than mild-moderate hearing loss participated in this cross-sectional study. Speech recognition was assessed with the QuickSIN test and six-band noise-vocoded sentences. Frequency discrimination, temporal interval discrimination, and gap detection thresholds were obtained using a three-alternative forced-choice task. Cortical auditory evoked potentials were recorded in response to tonal frequency changes and to gaps in noise. Cognitive testing included nonverbal reasoning, vocabulary, working memory, and processing speed.</p><p><strong>Results: </strong>There were age-related declines on many outcome measures, including speech perception in noise, cognition (nonverbal reasoning, processing speed), behavioral gap detection thresholds, and neural correlates of spectral and temporal processing (smaller P1 amplitudes and prolonged P2 latencies in response to frequency change; smaller N1-P2 amplitudes and longer P1, N1, P2 latencies to temporal gaps). Hearing thresholds and neural processing of spectral and temporal information were the main predictors of degraded speech recognition performance, in addition to cognition and perceptual learning. 
These factors accounted for 58% of the variability on the QuickSIN test and 41% of variability on the noise-vocoded speech.</p><p><strong>Conclusions: </strong>The results confirm and extend previous work demonstrating age-related declines in gap detection, cognition, and neural processing of spectral and temporal features of sounds. Neural measures of spectral and temporal processing were better predictors of speech perception than behavioral ones.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28883711.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-19"},"PeriodicalIF":2.2,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face Mask Effects on Acoustic Features and Intelligibility of Mandarin Chinese Speech.","authors":"Wei Hu, Libo Qiao, Lei Wu, Guoli Yan, Lihong Wang, Can Xu, Yao Chen, Chang Liu","doi":"10.1044/2025_JSLHR-24-00446","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00446","url":null,"abstract":"<p><strong>Purpose: </strong>The goal of this study was to investigate how face masks influenced the acoustic features of Chinese running speech in both temporal and spectral domains and how the intelligibility of the speech with face masks was affected in quiet and multitalker babbles. The relationship between the acoustic features and speech intelligibility was also examined.</p><p><strong>Method: </strong>In Experiment 1, Mandarin Chinese sentences were recorded by 24 native Mandarin Chinese speakers wearing a surgical mask, wearing a KN95 mask, or not wearing a mask, and temporal modulation (TM) depth, speaking rate, spectral tilt, and average value and standard deviation of fundamental frequency (<i>F</i>0) were then examined. In Experiment 2, the intelligibility of these recorded sentences was assessed in quiet and multitalker babble with signal-to-noise ratios of -2 and -5 dB. To further examine the possible causal relationship between the impacted acoustic variables and speech intelligibility under different mask-wearing conditions, the acoustic and speech intelligibility data were analyzed in a stepwise regression.</p><p><strong>Results: </strong>Results showed that both the KN95 and surgical masks produced significantly smaller TM depth compared to the no-mask condition. In terms of speaking rate, participants spoke faster with face masks than without a mask, whereas there was no significant difference between the KN95 and surgical mask. Additionally, spectral tilt was significantly shallower for the two face masks compared to the no-mask condition. 
Regarding <i>F</i>0, the mean <i>F</i>0 was higher with the KN95 mask than the surgical mask and no mask, while the standard deviation of <i>F</i>0 was lower in the two mask conditions than the no-mask condition, with no significant difference between the two types of masks. In addition to these acoustic differences, speech intelligibility in noise was significantly lower for the two mask conditions than the no-mask condition, with no significant difference between the KN95 and surgical masks, whereas there was no significant effect of face masks on speech intelligibility in quiet. Finally, the relationship between acoustic features and speech intelligibility showed that, under noise conditions, TM depth, spectral tilt, and <i>F</i>0 dynamics (e.g., standard deviation) were significantly correlated with speech intelligibility, while speaking rate and mean <i>F</i>0 were not.</p><p><strong>Conclusions: </strong>Acoustically, face masks led to smaller TM depth, slower speaking rate, shallower spectral tilt, higher mean <i>F</i>0, and smaller standard deviation of <i>F</i>0 in Mandarin Chinese running speech, and perceptually resulted in lower speech intelligibility in noise, but had no impact on speech intelligibility in quiet. 
Findings also suggest that certain acoustic characteristics (e.g., TM depth and spectral tilt) play important roles in speech intelligibility, especially in challenging listeni","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-16"},"PeriodicalIF":2.2,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Australian Sign Language Lexicons in a Bilingual-Bicultural Program.","authors":"Erin West, Shani Dettman, Colleen Holt","doi":"10.1044/2025_JSLHR-24-00651","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00651","url":null,"abstract":"<p><strong>Purpose: </strong>The aim of the study was to describe the expressive sign vocabularies of a group of children learning Australian Sign Language (Auslan).</p><p><strong>Method: </strong>The spontaneous signs of 44 children aged 3.0-6.8 years enrolled in one early-years bilingual-bicultural educational program were documented using a new approach, the Handshape Analysis Recording Tool, across a 2-year period. The resultant corpus was analyzed to determine the frequency of word classes including nouns, verbs, and adjectives.</p><p><strong>Results: </strong>There were 3,003 Auslan tokens and 806 different sign types. Nouns, adjectives, and verbs were highly represented in this exploratory study, comprising 54.1%, 21.0%, and 15.8% of the entire corpus, respectively. Preliminary analyses indicated differences in the composition of Auslan vocabularies when compared with existing spoken English and American Sign Language data.</p><p><strong>Conclusions: </strong>This exploratory study identified that the types of Auslan word classes used by this heterogeneous group of young learners included a high proportion of nouns and adjectives. 
While comparisons with past data are stated with caution as the composition of the child sample group was not controlled, there is preliminary support for earlier exposure and focused teaching of Auslan to facilitate the development of more varied expressive sign vocabularies.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-20"},"PeriodicalIF":2.2,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144080556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Complex Pitch Perception Deficits in Dyslexia Persist Regardless of Previous Musical Experiences.","authors":"Delaney E Kelemen, Camden Burnsworth, Charles Chubb, Tracy M Centanni","doi":"10.1044/2025_JSLHR-24-00883","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00883","url":null,"abstract":"<p><strong>Purpose: </strong>Pitch perception is important for speech sound learning, and reading acquisition requires integration of speech sounds and written letters. Many individuals with dyslexia exhibit auditory perception deficits that may therefore contribute to their reading impairment given that complex pitch perception is crucial for categorizing speech sounds. Given rising interest in music training as a reading intervention, understanding associations between prior music experiences and pitch perception is important. This study explored the relationship between pitch perception skills and reading ability in young adults with and without dyslexia with various levels of musical experience.</p><p><strong>Method: </strong>Young adults (18-35 years old) with (<i>N</i> = 43) and without (<i>N</i> = 105) dyslexia completed two pitch perception tasks, reading assessments, and a survey reporting formal music training and childhood home music environment (HME).</p><p><strong>Results: </strong>Participants with dyslexia performed worse than typically developing peers on both pitch perception tasks. Single-word reading was related to pitch perception in the typically developing group only. Childhood HME positively correlated with mode categorization and simple pitch discrimination in both groups. Formal music training was associated with performance on both pitch perception tasks in the typically developing group, and simple pitch discrimination in the dyslexia group.</p><p><strong>Conclusions: </strong>Pitch perception deficits may interfere with complex acoustic categorization and persist in some individuals with dyslexia despite prior music experiences. 
Future research should investigate the link between pitch perception and phonological awareness in dyslexia and assess whether music interventions targeting these skills improve reading.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-14"},"PeriodicalIF":2.2,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144080706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Significance of a Higher Prevalence of ADHD and ADHD Symptoms in Children Who Stutter.","authors":"Bridget Walsh, Seth E Tichenor, Katelyn L Gerwin","doi":"10.1044/2025_JSLHR-24-00668","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00668","url":null,"abstract":"<p><strong>Purpose: </strong>Research suggests that attention-deficit/hyperactivity disorder (ADHD) and its symptoms occur more frequently in individuals who stutter. The purpose of this study was to document the prevalence of ADHD diagnoses and ADHD symptoms in children who stutter and examine potential relationships between ADHD and stuttering characteristics.</p><p><strong>Method: </strong>A total of 204 children between the ages of 5 and 18 years (<i>M</i> = 9.9 years; <i>SD</i> = 3.5 years) and their parents participated in the study. Parents completed the ADHD Rating Scale (ADHD-RS) indexing Inattention and Hyperactivity-Impulsivity symptoms, and children completed the age-appropriate version of the Overall Assessment of the Speaker's Experience of Stuttering assessing the adverse impact of stuttering. Chi-square proportions and Mann-Whitney <i>U</i> tests were used to assess differences in demographic and other variables of interest between children with and without an ADHD diagnosis. Multiple linear regression was used to assess relationships between ADHD symptoms and stuttering characteristics.</p><p><strong>Results: </strong>Parents reported that 17.2% of children who stutter in our sample had been diagnosed with ADHD. Over 40% of children without an ADHD diagnosis had ADHD-RS scores that met the criteria for further evaluation. 
No significant relationship between ADHD symptoms and stuttering severity was found, but child age and inattention scores significantly, albeit modestly, predicted the adverse impact of stuttering.</p><p><strong>Conclusions: </strong>Researchers and clinicians might be privy to a child's ADHD diagnosis, but they should recognize that many children who stutter without an ADHD diagnosis may exhibit elevated symptoms of inattention and hyperactivity-impulsivity. These symptoms can complicate both research outcomes and the treatment of stuttering.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28899620.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-18"},"PeriodicalIF":2.2,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Integrative Supraglottic Sound Source Taxonomy (SSST) for Pathological Speaking Voice: A Case Series.","authors":"Mathias Aaen, Cathrine Sadolin, Julian McGlashan","doi":"10.1044/2025_JSLHR-24-00475","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00475","url":null,"abstract":"<p><strong>Objectives: </strong>Supraglottic structures can participate in sound source creation when there is either pathological or healthy glottal-level voice production. For pathological voices, supraglottal structures may form part of the overall sound production, be associated with a presenting symptom, or may constitute a substitutional vibratory source. A recent taxonomy in healthy singing populations proposes four dimensions, including distinct phenotyping, vibrational strategies, level of control, and number of vibrating sources to distinguish among a number of supraglottic sound sources in healthy voices, yet differences and similarities between healthy and unhealthy involvement of supraglottic sound sources remain unclear. The purpose of this study was to extend the previously outlined supraglottic sound source taxonomy based in healthy singing populations to pathological voice and develop an integrative supraglottic sound source taxonomy (SSST).</p><p><strong>Method: </strong>A case series of seven patients identified as involving vibrations of supraglottic structures during routine clinical assessment were included and discussed according to the supraglottic sound source taxonomy dimensions. Patients were assessed using stroboscopy, electroglottography, and acoustic measures during sustained vowel tasks and continuous speech tasks at comfortable pitches.</p><p><strong>Results: </strong>Beyond supplementary and substitutional strategies for involving supraglottic sound sources, pathological voices may also recruit supraglottic structures in a compensatory manner allowing for improved vocal fold entrainment. 
The results suggest that compensatory strategies came in two forms, one for which the pathology necessitating supraglottic sound source involvement is irreversible (e.g., following extensive cordectomy) and one where the pathology is reversible (e.g., following medialization laryngoplasty procedure for unilateral paralysis). Accordingly, supraglottic vibrations can be separated into an integrative taxonomy that outlines supplementary, compensatory, or substitutional functions of vibration with further dimensions related to intentional or unintentional level of control, unisource or multisource number of supraglottic sound sources, and distinction of the involved supraglottic phenotypes. Previously identified distinct phenotypes were determined in the studied population according to the anatomical vibration source, including ventricular fold vibrations, arytenoid against arytenoid vibrations, cuneiform/arytenoid against epiglottis vibrations, and vibrations in the aryepiglottic free edge along with large vocal fold amplitude of vibrations. The study proposes hypotheses as to differences between healthy and pathological use of supraglottic vibrations along dimensions of laryngeal, respiratory, and resonatory technical ability and control.</p><p><strong>Conclusions: </strong>The study presents an integrative SSST including phenotypin","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":"68 5","pages":"2157-2174"},"PeriodicalIF":2.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144056992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}