{"title":"Choice of communication mode at home and speechreading performance of adolescents with hearing impairment in China.","authors":"Fen Zhang, Jianghua Lei, Huina Gong, Zhenhong Ji, Haifeng Wang, Qin Zhou, Xiaojun Wu, Liang Chen","doi":"10.1080/02699206.2024.2437441","DOIUrl":"10.1080/02699206.2024.2437441","url":null,"abstract":"<p><p>The ability to speechread is often critical for persons with hearing impairment (HI), who may depend on speechreading to access the spoken language and interact with the hearing world. It is not clear, however, whether the primary mode of communication at home will influence speechreading abilities of young adults with HI even when they are enrolled in the same school with the same communication or instructional methods. Thirty-two hearing-impaired adolescents whose parents chose spoken language as the primary mode of communication of the family (the SPOKEN group) and thirty-two hearing-impaired adolescents with sign language as the primary mode of communication of the family (the SIGN group) were administered a Chinese speechreading battery consisting of tests at monosyllabic word, disyllabic word and sentence levels. The SPOKEN group was able to accurately identify significantly more monosyllabic words, disyllabic words, and sentences by speechreading than the SIGN group. In addition, mean accuracy rates of identifying disyllabic words via speechreading were higher than single words and sentences, and identifying sentences via speechreading took longer time than single words and phrases. These results suggest that the differences in speechreading of HI students may result not only from different educational approaches, but also from family language communication experiences, and this difference may exist before the students with HI start formal schooling.</p>","PeriodicalId":49219,"journal":{"name":"Clinical Linguistics & Phonetics","volume":" ","pages":"645-662"},"PeriodicalIF":0.8,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using electromagnetic articulography to investigate tongue, lip and jaw tremor associated with Parkinson's disease.","authors":"Teja Rebernik, Jidde Jacobi, Mark Tiede, Martijn Wieling","doi":"10.1080/02699206.2025.2501072","DOIUrl":"https://doi.org/10.1080/02699206.2025.2501072","url":null,"abstract":"<p><p>Tremor in Parkinson's disease is most frequently studied in the limbs, even though it also occurs in the vocal tract. In the current study, we assessed the presence of tongue, lip and jaw tremor in 34 individuals with Parkinson's disease (IwPD) and 25 controls (CS). We used electromagnetic articulography sensors attached to the tongue, the lips, and the jaw to measure orolingual tremor while the participants were performing a series of tasks. Additionally, we acoustically measured frequency and amplitude tremor of the voice in a sustained phonation task. Our findings revealed that IwPD showed significantly more tongue, lip, and jaw tremor than CS. Kinematic tremor frequency and RMS amplitude did not differ between IwPD and CS. We found no group difference in voice tremor prevalence or frequency in our acoustic analysis. While intensity and power indices seemed stronger in IwPD compared to CS, these differences were not significant. We show that electromagnetic articulography is suited for identifying orolingual tremor. While kinematic tremor was more prevalent in IwPD, it also appeared in CS, underlining the importance of including control participants in this type of study.</p>","PeriodicalId":49219,"journal":{"name":"Clinical Linguistics & Phonetics","volume":" ","pages":"1-20"},"PeriodicalIF":0.8,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144152657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stuttering in Mandarin-speaking adults.","authors":"Yuting Song, Michael P Robb, Yong Yang, Yang Chen","doi":"10.1080/02699206.2025.2507045","DOIUrl":"10.1080/02699206.2025.2507045","url":null,"abstract":"<p><p>Mandarin, a tonal language, features four distinct lexical tones (T1, T2, T3, T4) and one neutral tone (T0), each with unique pitch variations. This exploratory study examined the relationship between these tones and stuttering in 26 Mandarin-speaking adults. The amount of stuttering that occurred for each type of tone was identified and analysed according to absolute occurrence across tones, as well as the relative occurrence within each type of tone. Significant differences were found in absolute occurrence of stuttering across tones with the neutral tone (T0) showing the lowest stuttering frequency and T3 and T4 the highest. The relative occurrence of stuttering also identified the lowest stuttering for T0; however, the four lexical tones did not significantly differ. The results suggest that a specific type of lexical tone is unlikely to trigger a moment of stutter. Rather, it is the variation in tonal patterns during the speech stream that leads to a disruption in fluency.</p>","PeriodicalId":49219,"journal":{"name":"Clinical Linguistics & Phonetics","volume":" ","pages":"1-13"},"PeriodicalIF":0.8,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144143130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A processability theory perspective on morphosyntax in school-age children with developmental language disorder.","authors":"Gisela Håkansson, Nelli Kalnak","doi":"10.1080/02699206.2025.2499147","DOIUrl":"https://doi.org/10.1080/02699206.2025.2499147","url":null,"abstract":"<p><p>This study examines the production of morphosyntax in Swedish-speaking children diagnosed with Developmental Language Disorder (DLD). Data from a Sentence Repetition Task was used to investigate if there is an implicational order according to Processability Theory (PT) in grammatical structures produced by school-age children with DLD. PT is a cognitive theory of language development that assumes five implicational stages of morphosyntactic development. The analysis was based on a selection of sentences representing the different PT stages. The participants (<i>n</i> = 49; 6;5-11;5 years of age) were recruited from school language units for children with DLD. The results confirm an implicational order: the participants produced structures from a higher stage only if they also produced structures from lower stages. It is suggested that the developmental hierarchy can be used in the intervention of children with DLD by focusing on the next stage. Also, only 26.5% of the participants achieved PT stage 4, and one child (2%) reached the highest PT stage 5. This is discussed in relation to what is known regarding PT stages in typically developing children, as well as associations with language, memory, and non-verbal measures.</p>","PeriodicalId":49219,"journal":{"name":"Clinical Linguistics & Phonetics","volume":" ","pages":"1-14"},"PeriodicalIF":0.8,"publicationDate":"2025-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144005435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Phonemic simplification in apraxia of speech and aphasia with phonemic paraphasia.","authors":"Katarina L Haley, Jessica D Richardson, Tyson G Harmon, Peter Turkeltaub, Adam Jacks","doi":"10.1080/02699206.2025.2498437","DOIUrl":"https://doi.org/10.1080/02699206.2025.2498437","url":null,"abstract":"<p><strong>Purpose: </strong>There are varied reports about the extent to which people with apraxia of speech (AOS) simplify the phonemic complexity of utterances they attempt to produce and whether the degree to which they do so might inform differential diagnosis relative to aphasia with phonemic paraphasia (APP). Our study purpose was to determine whether either or both diagnostic groups simplify the phonemic content for words they repeat during a typical motor speech evaluation.</p><p><strong>Method: </strong>195 people with aphasia after stroke were assigned to four diagnostic groups based on quantitative metrics of core speech criteria for AOS and APP. In addition to the target groups, the sample was divided into a borderline group with equivocal feature combinations (BL) and a group with minimal sound production errors (MIN). Monosyllabic, disyllabic, and multisyllabic words were transcribed phonetically and scored for phonemic complexity. The ratio of produced complexity relative to target complexity - the word complexity measure (WCM) ratio - was compared across groups.</p><p><strong>Results: </strong>According to the WCM ratio, participants in all four groups, including the group with minimal speech sound involvement, simplified more productions than they complicated. Those who produced the most speech sound errors also displayed greater phonemic simplification.</p><p><strong>Discussion: </strong>People with stroke-induced aphasia sometimes produce words that are phonemic complications of targets, but more often they simplify the phonemic output. We conclude that phonemic simplification at the word level has limited value for differentiating clinically between AOS and APP. Future research should consider comparing alternative simplification measures.</p>","PeriodicalId":49219,"journal":{"name":"Clinical Linguistics & Phonetics","volume":" ","pages":"1-19"},"PeriodicalIF":0.8,"publicationDate":"2025-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143992323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Morphosyntactic and lexical features in 5;0-6;0 years old Persian-speaking children with a history of late-talking: A 3 years follow up.","authors":"Seyedeh Fatemeh Ebrahimian, Mozhgan Asadi, Masoomeh Salmani","doi":"10.1080/02699206.2025.2496471","DOIUrl":"https://doi.org/10.1080/02699206.2025.2496471","url":null,"abstract":"<p><p>This longitudinal study compared morphosyntactic and lexical skills in Persian-speaking children aged 5;0-6;0 with a history of late-talking (LT, n=28) and typically developing peers (TD, n=26). Participants, initially assessed at 30 months (31 LT, 32 TD), were matched for age and socioeconomic status. Language skills were evaluated using the Test of Language Development (TOLD), mean length of utterance in morphemes (MLUm), Persian developmental sentence scoring (PDSS), a<sup>2</sup> (Maas), number of total words (NTW), and number of different words (NDW). Results showed that 10 LT children improved (classified as improved LTs) but still scored below TD peers. Improved LTs outperformed unimproved LTs. TD children significantly surpassed both LT groups in morphosyntactic and lexical measures. Stepwise linear regression identified expressive vocabulary size (MCDI-II: Words) and NDW at 30 months as significant predictors of later MLUm and PDSS scores in the combined sample (LT+TD) at 5;0-6;0 years. Despite compensatory progress, LT children remained at the lower end of the normal range, underscoring the need for ongoing monitoring and early intervention during critical developmental periods. Smaller expressive vocabularies at 30 months correlated with persistent delays, highlighting the importance of targeted support for high-risk cases.</p>","PeriodicalId":49219,"journal":{"name":"Clinical Linguistics & Phonetics","volume":" ","pages":"1-20"},"PeriodicalIF":0.8,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144023914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Instrumental articulatory techniques investigating lingual variability in typically developing children: A scoping review.","authors":"Amy Smith, Anja Kuschmann, Eleanor Lawson, Maria Cairney, Joanne Cleland","doi":"10.1080/02699206.2025.2486626","DOIUrl":"https://doi.org/10.1080/02699206.2025.2486626","url":null,"abstract":"<p><p>This scoping review was designed to provide an overview of instrumental articulatory techniques used to investigate lingual variability in typically developing children. Despite extensive research on phonological acquisition, the development of speech motor control in children is less understood. Kinematic studies in this area have focused on children under 10, but adolescents' speech and the attainment of adult-like motor control remains under-researched. This review includes studies using instrumental techniques such as Ultrasound Tongue Imaging (UTI), Electropalatography (EPG) and Electromagnetic Articulography (EMA) to measure spatial and temporal articulatory features using a variety of metrics. Studies show greater articulatory variability in children compared to adults; however, inconsistencies in methodologies and participant samples limit the ability to synthesise findings effectively. Future research should focus on longitudinal studies spanning childhood and adolescence, using techniques that are easily incorporated into clinical practice. A detailed understanding of typical articulatory variability across different age ranges is crucial for identifying speech disorders and improving clinical interventions.</p>","PeriodicalId":49219,"journal":{"name":"Clinical Linguistics & Phonetics","volume":" ","pages":"1-23"},"PeriodicalIF":0.8,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144005438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Production of different types of familiar expressions by individuals with left- and right-hemisphere damage across discourse elicitation tasks.","authors":"Seung-Yun Yang, Akiko Fuse, Diana Sidtis, Seung Nam Yang","doi":"10.1080/02699206.2025.2485077","DOIUrl":"https://doi.org/10.1080/02699206.2025.2485077","url":null,"abstract":"<p><p>This study aimed to explore the production of familiar expressions (e.g. idioms, proverbs and pause fillers), including different subtypes, and their variation across different types of elicited discourse in individuals with aphasia due to left hemisphere damage (LHD) and those with right hemisphere damage (RHD) to healthy control (HCs). Twenty-nine individuals (12 with LHD, 8 with RHD and 9 hCs) provided elicited discourse samples during four tasks (free speech, picture description, story narrative and procedural tasks) from TalkBank (AphasiaBank and RHDBank). Familiar expressions were categorised into two broad types: nuanced (conveying emotional or attitudinal meaning) and non-nuanced (literal and speech-flow enhancing). Results showed that individuals with LHD produced more familiar expressions, especially nuanced ones, than those with RHD or HCs. A correlation was found between aphasia severity and the production of familiar expressions, with individuals who had more severe language impairments producing a higher proportion of familiar expressions in some tasks. No significant task differences in familiar expression production were observed among the groups. This study revealed that brain damage affects the production of familiar expressions, with individuals with LHD using them more frequently and in a more nuanced manner. In contrast, individuals with RHD had difficulty producing familiar expressions. Clinically, this underscores the importance of considering hemisphere-specific deficits when assessing and treating language impairments in individuals with brain damage, as therapies may need to be tailored to address the distinct challenges faced by individuals with LHD versus RHD.</p>","PeriodicalId":49219,"journal":{"name":"Clinical Linguistics & Phonetics","volume":" ","pages":"1-20"},"PeriodicalIF":0.8,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143796839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Usual and unusual phonological processes in monolingual and bilingual French-speaking children.","authors":"Margaret Kehoe","doi":"10.1080/02699206.2025.2475064","DOIUrl":"https://doi.org/10.1080/02699206.2025.2475064","url":null,"abstract":"<p><p>In the phonological acquisition literature, a distinction is made between usual and unusual phonological processes. Usual processes are present in the speech of young children with typical development (TD), whereas unusual processes are infrequent. Studies, however, have documented unusual processes in the speech of bilingual children. This study examines the frequency of usual and unusual phonological processes in the speech of French-speaking monolingual and bilingual children with TD. Three existing datasets were analysed. Each dataset contained the speech productions of 40 children with a mean age of 2;5-2;6 (for a total number of 78 monolingual and 42 bilingual participants). Two datasets were obtained through picture-naming tasks; one dataset contained spontaneous speech samples. Results indicated that both sets of phonological processes were of low frequency across all children. Only two usual processes, <i>cluster reduction</i> and <i>palatal fronting</i>, were present in 10% or more children in all three datasets. Unusual processes were less frequent than typical processes, although two unusual processes, unusual cluster reduction and palatalisation of /s/ were also present in the speech of 10% or more children in one of the three datasets. There were few differences in the frequency of unusual processes in bilingual versus monolingual children. We provide a tentative list of usual versus unusual phonological processes in French, which may prove useful for clinicians when diagnosing speech sound disorder.</p>","PeriodicalId":49219,"journal":{"name":"Clinical Linguistics & Phonetics","volume":" ","pages":"1-33"},"PeriodicalIF":0.8,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143774813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Differences in nasalance scores obtained with different Nasometer headsets.","authors":"Tim Bressmann, Blanche Hei Yung Tang","doi":"10.1080/02699206.2024.2305118","DOIUrl":"10.1080/02699206.2024.2305118","url":null,"abstract":"<p><p>The goal of the present research study was to investigate possible differences in nasalance scores between different Nasometer headgears. Frequency response characteristics of microphone pairs in a Nasometer model 6200, a model 6450 and two model 6500 headsets were compared using long-term average spectra of white noise and multi-speaker babble signals. Prerecorded sound files from a male and a female speaker were used to record nasalance scores with the four Nasometer headsets and to calculate cumulative absolute differences within and between the headsets. The main outcome measures were the cumulative absolute differences between the decibel (dB) values in the frequency bins from 300 to 750 Hz for the nasal and oral channels of each microphone pair. Cumulative absolute differences between nasalance scores of repeated stimuli within and across Nasometer headsets were tabulated. Results showed that cumulative absolute differences for the frequency range 300-750 Hz were between 6.58 and 7.68 dB. Within headsets, 95.6% to 100% of measurements of all four Nasometer headsets were within 3 nasalance points, although test-retest differences of up to 6 nasalance points were found. Between headsets, 56.1% to 98.9% of measurements were within 3 nasalance points, with the single largest difference of 8 nasalance points. In conclusion, differences between repeated nasalance scores obtained with the same and different headsets were noted. Clinicians should allow a margin of error of ±6 to 8 nasalance points when interpreting scores from different Nasometer headsets.</p>","PeriodicalId":49219,"journal":{"name":"Clinical Linguistics & Phonetics","volume":" ","pages":"504-514"},"PeriodicalIF":0.8,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139724713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}