Journal of speech, language, and hearing research : JSLHR (Latest Articles)

Poor Spectral Modulation Sensitivity Disrupts Development of Phonological Sensitivity: Evidence From Children With Histories of Chronic Otitis Media.
IF 2.2
Journal of speech, language, and hearing research : JSLHR Pub Date : 2025-10-14 Epub Date: 2025-10-01 DOI: 10.1044/2025_JSLHR-25-00017
Susan Nittrouer, Heather Starr, Halle Kurit, Thomas Schrepfer
{"title":"Poor Spectral Modulation Sensitivity Disrupts Development of Phonological Sensitivity: Evidence From Children With Histories of Chronic Otitis Media.","authors":"Susan Nittrouer, Heather Starr, Halle Kurit, Thomas Schrepfer","doi":"10.1044/2025_JSLHR-25-00017","DOIUrl":"10.1044/2025_JSLHR-25-00017","url":null,"abstract":"<p><strong>Purpose: </strong>This study tested the hypotheses that (a) sensitivity to spectral modulation has a protracted course of development; (b) its development can be disrupted by diminished auditory experience early in life, as children with chronic otitis media often encounter; and (c) delays in development of spectral modulation sensitivity put children at risk for delays in development of phonological sensitivity, but not vocabulary acquisition.</p><p><strong>Method: </strong>Participants were 22 children with significant, documented histories of otitis media before 3 years of age, 16 children with negative histories of otitis media, and 21 adults. Thresholds of 70.7% were obtained for detection of spectral modulation in signals with low modulation rates (0.5-2.0 cycles per octave) using transformed up-down procedures. Standard scores for vocabulary and percent correct scores for phonological sensitivity were also obtained.</p><p><strong>Results: </strong>The three hypotheses were supported: (a) Even children with no significant histories of otitis media had higher (poorer) spectral modulation detection thresholds than adults; (b) children with significant histories of otitis media had higher spectral modulation detection thresholds than children without those histories; and (c) Spectral modulation detection thresholds were strongly correlated with phonological sensitivity, but not with vocabulary size for children.</p><p><strong>Conclusions: </strong>The central auditory pathways have a protracted developmental course that can be disrupted by temporary hearing loss early in life. This disruption in auditory development has cascading effects on suprathreshold functions, as well as on the language phenomena dependent upon development of those suprathreshold functions. These findings have implications beyond children with histories of otitis media.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"5067-5085"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145208935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
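The 70.7% detection thresholds reported above are characteristic of a 2-down/1-up transformed up-down (staircase) procedure (Levitt, 1971). The abstract does not give the tracking rules, so the sketch below is only a generic illustration of how such an adaptive track converges; the step size, reversal count, and simulated listener are assumptions, not the authors' protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(level_db, threshold_db=8.0, slope=1.5):
    """Assumed psychometric function of a simulated listener (chance = 50%)."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(level_db - threshold_db) / slope))

def two_down_one_up(start_db=20.0, step_db=2.0, n_reversals=8):
    """Generic 2-down/1-up transformed up-down track; it converges near the
    level yielding 70.7% correct (Levitt, 1971)."""
    level, n_correct, direction, reversals = start_db, 0, None, []
    while len(reversals) < n_reversals:
        if rng.random() < p_correct(level):      # simulated trial outcome
            n_correct += 1
            if n_correct == 2:                   # two correct in a row -> harder (level down)
                n_correct = 0
                if direction == +1:
                    reversals.append(level)
                direction, level = -1, level - step_db
        else:                                    # one error -> easier (level up)
            n_correct = 0
            if direction == -1:
                reversals.append(level)
            direction, level = +1, level + step_db
    return float(np.mean(reversals[2:]))         # average the late reversals as the threshold

print(f"Estimated 70.7%-correct threshold: {two_down_one_up():.1f} dB")
```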
Communication in Complex Situations: The Combined Influence of Dysarthria and Sensorineural Hearing Loss on Speech Perception in Everyday Noisy Environments.
IF 2.2
Journal of speech, language, and hearing research : JSLHR Pub Date : 2025-10-14 Epub Date: 2025-09-18 DOI: 10.1044/2025_JSLHR-24-00862
Sarah E Yoho, Eric W Healy, Tyson S Barrett, Stephanie A Borrie
{"title":"Communication in Complex Situations: The Combined Influence of Dysarthria and Sensorineural Hearing Loss on Speech Perception in Everyday Noisy Environments.","authors":"Sarah E Yoho, Eric W Healy, Tyson S Barrett, Stephanie A Borrie","doi":"10.1044/2025_JSLHR-24-00862","DOIUrl":"10.1044/2025_JSLHR-24-00862","url":null,"abstract":"<p><strong>Purpose: </strong>Here, we investigated how intelligibility is impacted in underappreciated, highly complex, but real-world communication scenarios involving two clinical populations-when the speaker has dysarthria and the listener has hearing loss, in noisy everyday environments. As a second aim, we examined the potential for modern noise reduction to mitigate the noise burden when listeners with hearing loss are attempting to understand a speaker with dysarthria.</p><p><strong>Method: </strong>Thirteen adults with sensorineural hearing loss (SNHL) listened and transcribed dysarthric speech under three processing conditions: quiet, noise, and noise reduced. The intelligibility scores of listeners with SNHL were compared with previously reported data collected from adults without hearing loss (Borrie et al., 2023).</p><p><strong>Results: </strong>Listeners with SNHL performed significantly poorer than typical-hearing listeners when listening to speech produced by a speaker with dysarthria-an intelligibility disadvantage that was exacerbated when background noise was present. However, it was also found that a time-frequency-based noise reduction technique was able to effectively restore the intelligibility of dysarthric speech in noise to approximate levels in quiet for listeners with hearing loss.</p><p><strong>Conclusions: </strong>The results highlight the substantial intelligibility burden placed upon a communication dyad consisting of a speaker with dysarthria and a listener with hearing loss, when background noise is present. Given the etiologies of dysarthria and hearing loss, and presence of noise in many everyday communication environments, this scenario is not uncommon. As such, these results are an important first step toward understanding the challenges experienced when communication disorders interact. The finding that noise reduction techniques can mitigate much of the noise burden provides a promising future direction for research that seeks to manage communication with two clinical populations.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4708-4719"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12533689/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145088840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
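The abstract refers to a time-frequency-based noise reduction technique without describing it. As a generic illustration of the time-frequency idea only (attenuate spectrogram cells estimated to be noise-dominated), here is a minimal spectral-subtraction sketch; the authors' actual algorithm is not detailed here and is almost certainly more sophisticated, and the assumption that the first frames are speech-free is mine.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_frames=10, floor=0.05):
    """Toy time-frequency noise reduction: estimate the noise magnitude spectrum
    from the first few frames (assumed speech-free) and attenuate each
    time-frequency cell accordingly."""
    f, t, Z = stft(noisy, fs=fs, nperseg=512)
    mag, phase = np.abs(Z), np.angle(Z)
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    gain = np.maximum(1.0 - noise_mag / (mag + 1e-12), floor)   # per-cell attenuation
    _, enhanced = istft(gain * mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return enhanced

# Usage with synthetic data: a tone in white noise.
fs = 16000
tone = 0.1 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
noisy = tone + 0.05 * np.random.default_rng(0).standard_normal(fs)
clean_est = spectral_subtraction(noisy, fs)
```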
Mom, Dad, and Ball: Manner of Articulation Sequences Within Children's Consonant-Vowel-Consonant Words.
IF 2.2
Journal of speech, language, and hearing research : JSLHR Pub Date : 2025-10-14 Epub Date: 2025-10-02 DOI: 10.1044/2025_JSLHR-24-00811
Barbara L Davis, Katsura Aoyama, K Vest, Leigh A Loewenstein
{"title":"<i>Mom</i>, <i>Dad</i>, and <i>Ball:</i> Manner of Articulation Sequences Within Children's Consonant-Vowel-Consonant Words.","authors":"Barbara L Davis, Katsura Aoyama, K Vest, Leigh A Loewenstein","doi":"10.1044/2025_JSLHR-24-00811","DOIUrl":"10.1044/2025_JSLHR-24-00811","url":null,"abstract":"<p><strong>Purpose: </strong>Previous studies of early speech acquisition have established characteristics of phonemes and syllable structures produced by young children. Fewer studies compared patterns in children's within-word phoneme sequences of the target words with their actual productions. Additionally, studies of consonant sequences are more frequently focused on place of articulation than manner of articulation. This study aims to investigate consonant sequences in manner of articulation within children's actual productions as well as their target sequences.</p><p><strong>Method: </strong>The data were taken from a larger longitudinal study in the English Davis corpus. Consonant sequences in 3,328 tokens of consonant-vowel-consonant (C<sub>1</sub>VC<sub>2</sub>) target word forms from 18 children were analyzed. The data for this study were taken from sessions when the children produced one word at a time (ages range from 0;10 to 2;0 [years;months]). Phoneme sequences within the children's target words and their actual productions of those words were compared to examine consonant manner of articulation in first (C<sub>1</sub>) and second (C<sub>2</sub>) consonants.</p><p><strong>Results: </strong>Approximately 50% of C<sub>1</sub>VC<sub>2</sub> target words contained repeated manner sequences (e.g., stop-stop, <i>dog</i>; nasal-nasal, <i>mine</i>). The other 50% contained variegated manner sequences (e.g., stop-nasal, <i>down</i>, <i>done</i>). When target words contained repeated manner sequences (e.g., stop-stop), children's actual productions matched the target sequence more frequently than when the target words contained variegated sequences (e.g., stop-nasal).</p><p><strong>Conclusions: </strong>Results showed that word-level characteristics (i.e., repeated or variegated sequence) in target words are important for children's success in matching their production to their target sequences during the early period of speech and language development. The same pattern was previously observed for consonant place sequences in C<sub>1</sub>VC<sub>2</sub> words <i>and</i> place and manner sequences in consonant-vowel-consonant-vowel words.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"4688-4707"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12533688/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145215474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
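To make the repeated-versus-variegated distinction concrete, here is a small sketch that classifies C1VC2 target words by whether the two consonants share manner of articulation. The manner inventory and the tiny word list are illustrative assumptions; the study's own coding scheme and corpus are not reproduced here.

```python
# Toy classifier for manner sequences in C1VC2 words (illustrative only).
MANNER = {
    "p": "stop", "b": "stop", "t": "stop", "d": "stop", "k": "stop", "g": "stop",
    "m": "nasal", "n": "nasal",
    "f": "fricative", "v": "fricative", "s": "fricative", "z": "fricative",
    "l": "liquid", "r": "liquid",
}

def manner_sequence(c1, c2):
    """Return 'repeated' if C1 and C2 share manner of articulation, else 'variegated'."""
    return "repeated" if MANNER[c1] == MANNER[c2] else "variegated"

# Example target words from the abstract, coded as (C1, vowel, C2).
examples = {"dog": ("d", "o", "g"), "mine": ("m", "ai", "n"), "down": ("d", "au", "n")}
for word, (c1, _, c2) in examples.items():
    print(word, "->", MANNER[c1] + "-" + MANNER[c2], "->", manner_sequence(c1, c2))
```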
Automatically Calculated Context-Sensitive Features of Connected Speech Improve Prediction of Impairment in Alzheimer's Disease.
IF 2.2
Journal of speech, language, and hearing research : JSLHR Pub Date : 2025-10-14 DOI: 10.1044/2025_JSLHR-24-00297
Graham Flick, Rachel Ostrand
{"title":"Automatically Calculated Context-Sensitive Features of Connected Speech Improve Prediction of Impairment in Alzheimer's Disease.","authors":"Graham Flick, Rachel Ostrand","doi":"10.1044/2025_JSLHR-24-00297","DOIUrl":"10.1044/2025_JSLHR-24-00297","url":null,"abstract":"<p><strong>Purpose: </strong>Early detection is critical for effective management of Alzheimer's disease (AD) and other dementias. One promising approach for predicting AD status is to automatically calculate linguistic features from open-ended connected speech. Past work has focused on individual word-level features such as part of speech counts, total word production, and lexical richness, with less emphasis on measuring the relationship between words and the context in which they are produced. Here, we assessed whether linguistic features that take into account where a word was produced in the discourse context improved the ability to predict AD patients' Mini-Mental State Examination (MMSE) scores and classify AD patients from healthy control participants.</p><p><strong>Method: </strong>Seventeen linguistic features were automatically computed from transcriptions of spoken picture descriptions from individuals with probable or possible AD (<i>n</i> = 176 transcripts). This included 12 word-level features (e.g., part of speech counts) and five features capturing contextual word choices (linguistic surprisal, computed from a computational large language model, and properties of words produced following filled pauses). We examined whether (a) the full set jointly predicted MMSE scores, (b) the addition of contextual features improved prediction, and (c) linguistic features could classify AD patients (<i>n</i> = 130) versus healthy participants (<i>n</i> = 93).</p><p><strong>Results: </strong>Linguistic features accurately predicted MMSE scores in individuals with probable or possible AD and successfully identified up to 87% of AD participants versus healthy controls. Statistical models that contained linguistic surprisal (a contextual feature) performed better than those that included only word-level and demographic features. Overall, AD patients with lower MMSE scores produced more empty words, fewer nouns and definite articles, and words that were higher frequency yet more surprising given the previous context.</p><p><strong>Conclusion: </strong>These results provide novel evidence that metrics related to contextualized word choices, particularly the surprisal of an individual's words, capture variance in degree of cognitive decline in AD.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-22"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145294990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
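The contextual feature driving the improvement above is linguistic surprisal, i.e., the negative log probability of each word given its preceding context under a language model. The abstract does not name the model the authors used; the sketch below shows one common way to compute per-token surprisal with a small causal language model (GPT-2 via the Hugging Face transformers library), as an illustration rather than the paper's pipeline, and the example sentence is made up.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_surprisal(text):
    """Surprisal (in bits) of each token given its preceding context."""
    ids = tokenizer(text, return_tensors="pt").input_ids           # (1, seq_len)
    with torch.no_grad():
        logits = model(ids).logits                                 # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)          # predictions for token t+1
    targets = ids[0, 1:]
    nats = -log_probs[torch.arange(targets.numel()), targets]
    bits = nats / torch.log(torch.tensor(2.0))
    return list(zip(tokenizer.convert_ids_to_tokens(targets.tolist()), bits.tolist()))

for tok, s in token_surprisal("The boy is reaching for a cookie in the jar."):
    print(f"{tok:>12s}  {s:6.2f} bits")
```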
Hard of Hearing Listeners Show Rollover at Moderate to High Levels for Speech Materials With and Without Semantic Context Information.
IF 2.2
Journal of speech, language, and hearing research : JSLHR Pub Date : 2025-10-14 Epub Date: 2025-09-18 DOI: 10.1044/2025_JSLHR-24-00804
Lukas Jürgensen, Tobias Neher, Michal Fereczkowski
{"title":"Hard of Hearing Listeners Show Rollover at Moderate to High Levels for Speech Materials With and Without Semantic Context Information.","authors":"Lukas Jürgensen, Tobias Neher, Michal Fereczkowski","doi":"10.1044/2025_JSLHR-24-00804","DOIUrl":"10.1044/2025_JSLHR-24-00804","url":null,"abstract":"<p><strong>Purpose: </strong>At low levels, a level increase typically leads to better speech intelligibility (SI) due to more audibility. At high levels, a level increase can lead to poorer SI and, thus, \"rollover.\" In a previous study conducted with listeners with normal audiometric thresholds, we found rollover with sentences without semantic context but not with semantic context, suggesting that context information can \"mask\" rollover. Here, we investigated if equivalent results can be found for listeners with elevated audiometric thresholds.</p><p><strong>Method: </strong>SI scores were measured for two groups of older hard of hearing adults with individual linear amplification. Testing was performed in speech-shaped noise with context-rich and context-free sentences. One group was tested at speech levels of 65 and 75 dB SPL. The other group was tested at a level approximating maximal SI, that is, the individual aided most comfortable level (aMCL) + 10 dB, and at 85 dB SPL. Linear mixed-effects models were used to test for level-dependent changes in SI for the two sentence materials.</p><p><strong>Results: </strong>Rollover occurred for both groups and sentence materials. For the measurements made at 65 and 75 dB SPL, SI decreased by 7.1% for both sentence materials. For the measurements made at aMCL +10 dB and 85 dB SPL, SI decreased by 9.3% for the context-free sentences and by 10.4% for the context-rich sentences.</p><p><strong>Conclusion: </strong>Linearly aided hard of hearing listeners show rollover at moderate to high levels for sentence materials with and without semantic context information.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"5055-5066"},"PeriodicalIF":2.2,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145088832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
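The level-dependent change in SI was tested with linear mixed-effects models. A minimal sketch of that kind of analysis, using statsmodels with made-up column names (subject, level, material, si), simulated listeners, and a random intercept per listener, is shown below; the authors' actual model structure (random slopes, coding of level, and so on) is not specified in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subj in range(12):                        # hypothetical listeners
    offset = rng.normal(0, 3)                 # random listener intercept
    for level in (65, 75):
        for material in ("context_free", "context_rich"):
            base = 85 if material == "context_rich" else 80
            si = base + offset - 0.7 * (level - 65) + rng.normal(0, 2)   # built-in "rollover"
            rows.append({"subject": f"s{subj}", "level": level,
                         "material": material, "si": si})
df = pd.DataFrame(rows)

# Random intercept per listener; the level x material interaction asks whether the
# drop from 65 to 75 dB SPL (rollover) differs between the two sentence materials.
fit = smf.mixedlm("si ~ level * material", data=df, groups=df["subject"]).fit()
print(fit.summary())
```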
On How Vocal Cues Impact Dynamic Credibility Judgments: Mouse-Tracking Paradigm Examining Speaker Confidence and Gender Through Voice Morphing.
IF 2.2
Journal of speech, language, and hearing research : JSLHR Pub Date : 2025-10-09 DOI: 10.1044/2025_JSLHR-24-00849
Zhikang Peng, Chaoyi Wang, Xiaoming Jiang
{"title":"On How Vocal Cues Impact Dynamic Credibility Judgments: Mouse-Tracking Paradigm Examining Speaker Confidence and Gender Through Voice Morphing.","authors":"Zhikang Peng, Chaoyi Wang, Xiaoming Jiang","doi":"10.1044/2025_JSLHR-24-00849","DOIUrl":"10.1044/2025_JSLHR-24-00849","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to explore how vocal cues of confidence and gender influence the dynamic mechanisms involved in reasoning about speaker credibility.</p><p><strong>Method: </strong>Using a mouse-tracking paradigm, 52 participants evaluated speaker credibility based on semantically neutral statements that varied in morphed levels of gender (Experiment 1) and confidence (Experiment 2). Participants' mouse trajectories and reaction times were recorded to assess their credibility judgments.</p><p><strong>Results: </strong>The findings revealed that perceived confidence significantly impacted credibility judgments and mouse trajectories, while gender did not. Higher levels of perceived confidence resulted in more credible assessments, demonstrated by direct mouse trajectories and quicker reaction times. Moreover, mouse trajectories reflected cognitive mediation effects between confidence and credibility judgments, indicating that vocal cues influence both the final judgments and the dynamic inference process during speaker credibility assessment.</p><p><strong>Conclusions: </strong>The study highlights the critical role of vocal cues, particularly confidence, in shaping perceptions of speaker credibility. It suggests that these vocal cues not only affect final credibility judgments but also play a significant role in the dynamic reasoning process involved in social inference.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.30265942.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-17"},"PeriodicalIF":2.2,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145260523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
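Mouse-tracking analyses typically reduce each trajectory to summary measures such as maximum deviation or area under the curve relative to the straight line from start point to chosen response. The abstract does not state which measures were used, so the helper below (maximum perpendicular deviation from the ideal straight path, on made-up coordinates) is only one common possibility, not the authors' analysis.

```python
import numpy as np

def max_deviation(xs, ys):
    """Maximum perpendicular distance of a mouse trajectory from the straight
    line connecting its start and end points (one common mouse-tracking index)."""
    p = np.column_stack([xs, ys]).astype(float)
    start, end = p[0], p[-1]
    dx, dy = end - start
    line_len = np.hypot(dx, dy)
    # 2D cross-product magnitude gives the perpendicular distance to the start-end line.
    dists = np.abs(dx * (p[:, 1] - start[1]) - dy * (p[:, 0] - start[0])) / line_len
    return dists.max()

# Hypothetical trajectory (screen pixels): start at the bottom, curve toward the response.
xs = [0, 5, 15, 40, 80, 100]
ys = [0, 30, 60, 80, 95, 100]
print(f"Maximum deviation: {max_deviation(xs, ys):.1f} px")
```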
The Experience of a First Hearing Aid Fitting: Perspectives From Adults With Hearing Loss, Their Relatives, and Hearing Care Professionals.
IF 2.2
Journal of speech, language, and hearing research : JSLHR Pub Date : 2025-10-08 DOI: 10.1044/2025_JSLHR-25-00162
Katherine Simoneau, Laurie Cormier, Mathilde Lefebvre-Demers, Marc-Olivier Blackburn, Claudia Côté, Normand Boucher, Claire Croteau, Mathieu Hotton
{"title":"The Experience of a First Hearing Aid Fitting: Perspectives From Adults With Hearing Loss, Their Relatives, and Hearing Care Professionals.","authors":"Katherine Simoneau, Laurie Cormier, Mathilde Lefebvre-Demers, Marc-Olivier Blackburn, Claudia Côté, Normand Boucher, Claire Croteau, Mathieu Hotton","doi":"10.1044/2025_JSLHR-25-00162","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-25-00162","url":null,"abstract":"<p><strong>Purpose: </strong>This study is the first step in a project aimed at developing an intervention program for new hearing aid (HA) users and their relatives in the Province of Quebec, Canada. The objectives were to describe the experience of a first HA fitting from the perspective of adults with hearing loss and their relatives, to identify facilitators and barriers to the fitting process, and to identify elements that should be included in an intervention program to support HA adoption and use. Satisfaction regarding HAs and fitting services was also assessed after fitting.</p><p><strong>Method: </strong>A mixed-methods design combining qualitative and quantitative data sources was used. Interviews were conducted with 10 new HA users, seven relatives, and 10 hearing care professionals. HA users also completed a questionnaire to assess their satisfaction with HAs and services after fitting. A qualitative content analysis was done on the data obtained from the interviews, and descriptive statistics were used to analyze data on satisfaction.</p><p><strong>Results: </strong>Identified facilitators and barriers to HA fitting for new users were related to professional services, HAs, relatives, and personal factors. Elements for inclusion in the intervention program were categorized into two groups: information to provide and support to offer. Participants reported a high satisfaction level with HAs (<i>M</i> = 87.6 ± 7.5%).</p><p><strong>Conclusions: </strong>Several factors can influence the success of a first HA fitting, including aspects related to technology, professional services, and psychosocial elements. Participants suggested important components to include in the intervention for first-time fittings. These results will be used to develop an intervention program for new HA users and their relatives.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.30235315.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-16"},"PeriodicalIF":2.2,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145254429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Translation and Validation of Leicester Cough Questionnaire in Kannada.
IF 2.2
Journal of speech, language, and hearing research : JSLHR Pub Date : 2025-10-08 DOI: 10.1044/2025_JSLHR-25-00233
Yamini Venkatraman, Vishak Acharya, Sindhu Kamath, Dhanshree R Gunjawate, Radish Kumar Balasubramanium
{"title":"Translation and Validation of Leicester Cough Questionnaire in Kannada.","authors":"Yamini Venkatraman, Vishak Acharya, Sindhu Kamath, Dhanshree R Gunjawate, Radish Kumar Balasubramanium","doi":"10.1044/2025_JSLHR-25-00233","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-25-00233","url":null,"abstract":"<p><strong>Purpose: </strong>Leicester Cough Questionnaire (LCQ) is a widely used patient reported outcome measure to profile the impact of cough on an individual's quality of life. It has been translated and validated in many languages but is unavailable in Kannada, a South Indian language. This research focused on translating and validating the LCQ in Kannada among individuals with chronic cough.</p><p><strong>Method: </strong>The LCQ-Kannada was cross-culturally adapted using a rigorous, standard translation procedure and validated in a chronic cough cohort. One hundred fifty-nine participants were enrolled based on eligibility criteria. Participants completed three questionnaires: LCQ-Kannada, Cough Symptom Score (CSS), and Cough Visual Analog Scale (CVAS). The translated questionnaire was evaluated for internal consistency, test-retest reliability, concurrent validity, and responsiveness.</p><p><strong>Results: </strong>The LCQ-Kannada obtained a high overall and domain-specific internal consistency with Cronbach's alpha coefficient values between .75 and .93. The repeatability was tested in 10% of the participants, and significant test-retest reliability scores were obtained (intraclass coefficients: .50-.91). The LCQ-Kannada correlated significantly with CVAS and CSS with coefficient values between .61-.74 and .52-.66, respectively (<i>p</i> < .001). Responsiveness was measured in 26 participants who reported improvement with treatment and had a significant change in LCQ-Kannada scores (mean improvement: 1.74-6.21; <i>p</i> < .001).</p><p><strong>Conclusion: </strong>The LCQ-Kannada is a reliable and valid clinical tool for individuals with chronic cough.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-9"},"PeriodicalIF":2.2,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145254444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
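Internal consistency in the study above is summarized with Cronbach's alpha. For reference, alpha for k items is k/(k-1) * (1 - sum of item variances / variance of the total score); the sketch below computes it for a small made-up item-response matrix and is not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical responses of 5 people to 4 questionnaire items (1-7 scale).
scores = [[5, 6, 5, 6],
          [3, 3, 4, 3],
          [6, 7, 6, 7],
          [2, 3, 2, 3],
          [4, 4, 5, 4]]
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```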
Exploring the Contributions of Various Acoustic Features in Cantonese Vocal Emotions.
IF 2.2
Journal of speech, language, and hearing research : JSLHR Pub Date : 2025-10-08 DOI: 10.1044/2025_JSLHR-24-00677
Dong Han, Yike Yang
{"title":"Exploring the Contributions of Various Acoustic Features in Cantonese Vocal Emotions.","authors":"Dong Han, Yike Yang","doi":"10.1044/2025_JSLHR-24-00677","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00677","url":null,"abstract":"<p><strong>Purpose: </strong>The aim of this study was to investigate the acoustic patterns of six emotions and a neutral state in Cantonese speech by focusing on the prosodic modulations that convey emotional content in this tonal language, which has six lexical tones.</p><p><strong>Method: </strong>We employed the extended Geneva minimalistic acoustic parameter set to systematically analyze the acoustic features of 3,474 recordings from the Cantonese Audio-Visual Emotional Speech Database. Linear mixed-effects models were fitted to examine variations in acoustic parameters across emotional states. Decision tree models were used to assess the relative contributions of 22 acoustic parameters in classifying emotions.</p><p><strong>Results: </strong>By fitting linear mixed-effects models, our results revealed statistically significant variations in most of the acoustic parameters across diverse emotional states. The decision tree models showed the relative contributions of 22 acoustic parameters in the classification of emotions, with spectral parameters accounting for 65.45% of the significance in distinguishing all seven emotional states, significantly exceeding other groups of features.</p><p><strong>Conclusions: </strong>Our findings highlight the unique characteristics of emotional expression in Cantonese, in which spectral parameters play a more significant role compared to the frequency-related parameters that are often emphasized in nontonal languages. Our results contribute significantly to understanding vocal emotion expression in tonal languages and are particularly useful for designing emotion-recognition systems and hearing aids that are tailored to tonal language environments. Furthermore, these insights have potential implications for enhancing emotional communication and cognitive training interventions for Cantonese-speaking individuals who use hearing aids or have cochlear implants, are on the autism spectrum, or have Alzheimer's disease.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-12"},"PeriodicalIF":2.2,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145254399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
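The relative contribution of the 22 acoustic parameters was assessed with decision tree models. A minimal sketch of that analysis pattern, using scikit-learn on a randomly generated feature table (the parameter names and emotion labels here are placeholders, not the study's data), is shown below.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
emotions = ["neutral", "happy", "sad", "angry", "fear", "disgust", "surprise"]

# Placeholder feature table: 700 recordings x 22 acoustic parameters
# (in the study these would be eGeMAPS-style functionals per recording).
feature_names = [f"param_{i:02d}" for i in range(22)]
X = rng.normal(size=(700, 22))
y = rng.choice(emotions, size=700)

clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)

# Relative contribution of each parameter to the emotion classification.
importances = pd.Series(clf.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False).head(10))
```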
Auditory Abnormality in Term Neonates With Different Stages of Hypoxic-Ischemic Encephalopathy.
IF 2.2
Journal of speech, language, and hearing research : JSLHR Pub Date : 2025-10-08 DOI: 10.1044/2025_JSLHR-25-00071
Ze Dong Jiang, Lili Ping, James Ken Jiang, Cui Wang
{"title":"Auditory Abnormality in Term Neonates With Different Stages of Hypoxic-Ischemic Encephalopathy.","authors":"Ze Dong Jiang, Lili Ping, James Ken Jiang, Cui Wang","doi":"10.1044/2025_JSLHR-25-00071","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-25-00071","url":null,"abstract":"<p><strong>Objective: </strong>This study aims to explore differences in brainstem auditory function shortly after birth between term neonates with different stages of perinatal hypoxic-ischemic encephalopathy (HIE) using brainstem auditory evoked responses.</p><p><strong>Method: </strong>The responses were recorded and analyzed during the first 8 days after birth in term neonates with HIE. The data were compared between the HIE neonates and normal controls and between the neonates with different stages of HIE.</p><p><strong>Results: </strong>Compared to normal controls, neonates with HIE, particularly those with Stage 3 HIE, had a significantly elevated response threshold. The response wave latencies were prolonged, and Wave V amplitude was reduced. The response abnormalities were generally more significant with increasing in the HIE stages. The I-V interval was nearly normal in the neonates with Stage 1 HIE but significantly prolonged in those with Stages 2 and 3 HIE. All wave latencies and I-V interval were significantly longer and Wave V amplitude was smaller in the neonates with Stages 2 or 3 HIE than in those with Stage 1 HIE. The neonates with Stage 3 HIE manifested significantly higher response threshold, significantly longer Waves III and V latencies, and moderately longer I-V interval than those with Stage 2 HIE.</p><p><strong>Conclusions: </strong>Brainstem auditory function is minimally affected by mild HIE but is significantly impaired in moderate and particularly severe HIE during the first 8 days after birth. The impairment tends to be more significant in severe HIE than in moderate HIE. The risk of auditory impairment is significantly increased in moderate and particularly severe HIE.</p>","PeriodicalId":520690,"journal":{"name":"Journal of speech, language, and hearing research : JSLHR","volume":" ","pages":"1-10"},"PeriodicalIF":2.2,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145254441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
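The I-V interval reported above is simply the Wave V latency minus the Wave I latency, and group differences of that kind are commonly tested with standard between-group statistics. The sketch below computes the interpeak interval and runs an independent-samples t test on made-up latencies; the study's own statistical procedures are not specified in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical Wave I and Wave V latencies (ms) for two groups of neonates.
wave_i_ctrl, wave_v_ctrl = np.array([1.6, 1.7, 1.6, 1.8]), np.array([6.6, 6.7, 6.5, 6.8])
wave_i_hie3, wave_v_hie3 = np.array([1.8, 1.9, 1.8, 2.0]), np.array([7.3, 7.5, 7.2, 7.6])

# I-V interpeak interval = Wave V latency - Wave I latency.
iv_ctrl = wave_v_ctrl - wave_i_ctrl
iv_hie3 = wave_v_hie3 - wave_i_hie3

t, p = stats.ttest_ind(iv_hie3, iv_ctrl)
print(f"I-V interval: controls {iv_ctrl.mean():.2f} ms, Stage 3 HIE {iv_hie3.mean():.2f} ms "
      f"(t = {t:.2f}, p = {p:.3f})")
```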