Journal of Speech Language and Hearing Research: Latest Articles

Linguistic Skills and Text Reading Comprehension in Prelingually Deaf Readers: A Systematic Review.
IF 2.2 · Q2 (Medicine)
Journal of Speech Language and Hearing Research. Pub Date: 2025-03-05; Epub Date: 2025-02-18; DOI: 10.1044/2024_JSLHR-24-00512
Marina Olujić Tomazin, Tomislav Radošević, Iva Hrastinski
{"title":"Linguistic Skills and Text Reading Comprehension in Prelingually Deaf Readers: A Systematic Review.","authors":"Marina Olujić Tomazin, Tomislav Radošević, Iva Hrastinski","doi":"10.1044/2024_JSLHR-24-00512","DOIUrl":"10.1044/2024_JSLHR-24-00512","url":null,"abstract":"<p><strong>Purpose: </strong>Despite the considerable scientific interest in researching the reading skills of the deaf population, most of these studies focus on reading comprehension (RC) at the word or sentence level. Such reading activates different underlying language processes than text-level reading, which is more akin to real-life reading literacy. The results of 36 studies on different linguistic skills and their correlation/prediction with text RC of deaf readers are reviewed, taking into account age and two language modalities (spoken language [SpL] and sign language [SL]).</p><p><strong>Method: </strong>The studies were systematized and analyzed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 (Page et al., 2021).</p><p><strong>Results: </strong>Most reviewed studies (92%) investigated how lexical and phonological skills in SpL relate to RC in deaf people, although there is a lack of studies (33%) investigating the relationship between morphological and syntactic skills in SpL and text-based RC in deaf people. Although results on phonology are quite conflicting, studies of this review consistently confirm that lexical skills are positively related to text RC. Despite only a few published studies on morphological and syntactic skills and RC in deaf readers, the results show strong evidence of their association. This review also provides evidence of a significant cross-modal correlation between SL skills and RC, by showing that in children and adolescents, better phonological skills and receptive vocabulary are associated to better RC, whereas in adults, only studies examining grammatical skills in SL found a significant association with RC in bimodal bilingual deaf readers.</p><p><strong>Conclusions: </strong>Lexical knowledge appears to be the primary contributor to text RC in deaf readers, whereas phonological effects remain inconclusive. Although morphological and syntactic competencies' impact warrants further investigation, they demonstrate consistent association with RC. There is also clear evidence of a positive cross-modal relationship between SL skills and RC.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1277-1310"},"PeriodicalIF":2.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143442761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Determining Optimal Talker Variability for Nonnative Speech Training: A Systematic Review and Bayesian Network Meta-Analysis.
IF 2.2 · Q2 (Medicine)
Journal of Speech Language and Hearing Research. Pub Date: 2025-03-05; Epub Date: 2025-02-12; DOI: 10.1044/2024_JSLHR-24-00599
Xiaojuan Zhang, Bing Cheng, Yu Zou, Yang Zhang
{"title":"Determining Optimal Talker Variability for Nonnative Speech Training: A Systematic Review and Bayesian Network Meta-Analysis.","authors":"Xiaojuan Zhang, Bing Cheng, Yu Zou, Yang Zhang","doi":"10.1044/2024_JSLHR-24-00599","DOIUrl":"10.1044/2024_JSLHR-24-00599","url":null,"abstract":"<p><strong>Purpose: </strong>This meta-analysis study aimed to determine the optimal level of talker variability in training to maximize second-language speech learning.</p><p><strong>Method: </strong>We conducted a systematic search for studies comparing different levels of talker variability in nonnative speech training, published through July 2024. Two independent reviewers screened studies for eligibility, extracted data, and assessed the risk of bias. A Bayesian network meta-analysis was implemented to estimate relative effect sizes of different talker variability training conditions and rank these conditions by their posterior probabilities using surface under the cumulative ranking curve (SUCRA) values.</p><p><strong>Results: </strong>A total of 32 studies involving 998 participants were analyzed to compare six training conditions based on the number of talkers. Using a no-training control condition as the reference and excluding the outlier, the random-effects model showed that training with six talkers was most effective (SUCRA = 94%, standardized mean difference [SMD] = 2.09, 95% CrI [1.30, 2.89]), exhibiting moderate between-study heterogeneity (posterior median <i>SD</i> = 0.60, 95% CrI [0.39, 0.90]). However, when considering both the format of talker presentation and training exposure, the conditions with four talkers presented in blocks across training sessions (SUCRA = 77%, SMD = 1.47, 95% CrI [0.92, 2.10]), two talkers intermixed during sessions (SUCRA = 75%, SMD = 1.65, 95% CrI [0.24, 3.03]), and six talkers intermixed (SUCRA = 72%, SMD = 1.38, 95% CrI [0.97, 1.79]), all showed similarly high effectiveness with only minor differences.</p><p><strong>Conclusions: </strong>This systematic review and Bayesian network meta-analysis demonstrate for the first time that optimizing talker variability in nonnative speech training requires a careful balance between the number of talkers and the presentation format. The findings suggest that a moderate level of talker variability is most effective for improving second-language speech training outcomes.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28319345.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1006-1023"},"PeriodicalIF":2.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143411557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
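The ranking step in the meta-analysis above rests on SUCRA, which collapses each condition's posterior rank distribution into a single 0%-100% score. The sketch below is only an illustration of that calculation, not the authors' analysis code (which would normally run in a dedicated network meta-analysis package): it computes SUCRA values from hypothetical posterior draws of standardized mean differences, and the draw values and condition count are made up.

```python
import numpy as np

def sucra_from_posterior(effect_draws, higher_is_better=True):
    """Compute SUCRA values from posterior draws of condition effects.

    effect_draws: array of shape (n_draws, n_conditions) with posterior samples
    of each condition's effect size (e.g., SMD relative to a common control).
    Returns one SUCRA value in [0, 1] per condition.
    """
    draws = np.asarray(effect_draws, dtype=float)
    if not higher_is_better:
        draws = -draws
    n_draws, k = draws.shape
    # Rank conditions within each posterior draw (rank 1 = best).
    order = np.argsort(-draws, axis=1)
    ranks = np.empty_like(order)
    rows = np.arange(n_draws)[:, None]
    ranks[rows, order] = np.arange(1, k + 1)
    # P(condition j is ranked r), for r = 1..k.
    rank_probs = np.stack([(ranks == r).mean(axis=0) for r in range(1, k + 1)])
    # SUCRA_j = (1 / (k - 1)) * sum over r = 1..k-1 of cumulative P(rank <= r).
    cum = np.cumsum(rank_probs, axis=0)[:-1]
    return cum.sum(axis=0) / (k - 1)

# Toy example with invented posterior draws for three training conditions.
rng = np.random.default_rng(0)
draws = rng.normal(loc=[2.0, 1.4, 0.5], scale=0.4, size=(4000, 3))
print(sucra_from_posterior(draws))  # the clearly best condition approaches 1.0
```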
The Interaction Between Vowel Quality and Intensity in Loudness Perception of Short Vowels in Mongolian.
IF 2.2 · Q2 (Medicine)
Journal of Speech Language and Hearing Research. Pub Date: 2025-03-05; Epub Date: 2025-01-29; DOI: 10.1044/2024_JSLHR-24-00366
Bailing Qi, Li Dong
{"title":"The Interaction Between Vowel Quality and Intensity in Loudness Perception of Short Vowels in Mongolian.","authors":"Bailing Qi, Li Dong","doi":"10.1044/2024_JSLHR-24-00366","DOIUrl":"10.1044/2024_JSLHR-24-00366","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the influence of vowel quality on loudness perception and stress judgment in Mongolian, an agglutinative language with free word stress. We aimed to explore the effects of intrinsic vowel features, presentation order, and intensity conditions on loudness perception and stress assignment.</p><p><strong>Method: </strong>Eight Mongolian short vowel phonemes (/ɐ/, /ə/, /i/, /ɪ/, /ɔ/, /o/, /ʊ/, and /u/) were recorded by a native Mongolian speaker of the Urad subdialect (the Chahar dialect group) in Inner Mongolia. The short vowels were paired under different intensity conditions. Native Mongolian listeners from Inner Mongolia participated in two loudness perception experiments: Experiment 1 examined the effects of presentation order and different intensity conditions on loudness perception using pairs of vowels. Experiment 2 explored how different vowel pairs influence perceptual outcomes and identified specific thresholds and perceptual boundaries for loudness perception.</p><p><strong>Results: </strong>The findings revealed that intensity significantly affected loudness perception, modulated by vowel quality. Presentation order of vowels affected loudness perception, and vowel centralization and lip rounding play crucial roles as well. Central vowels, particularly /ə/, were perceived as more prominent, whereas rounded vowels were more likely to be judged as stressed under equated intensity conditions. The study also identified a perceptual tendency toward final prominence, influenced by sonority and vowel positioning.</p><p><strong>Conclusions: </strong>This study highlights the intricate relationship among vowel quality, intensity, and stress perception in Mongolian. Different vowels exhibited distinct loudness perceptions at the same intensity level, emphasizing the importance of vowel quality in stress assignment. Vowels with higher sonority indices or those positioned peripherally in the vowel space are more likely to be perceived as prominent. These findings contribute to a broader understanding of the phonological processes and perceptual mechanisms in agglutinative languages and highlight the need for further research across diverse dialects.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"880-894"},"PeriodicalIF":2.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143068547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effects of an Inclusive Group-Based Naturalistic Developmental Behavioral Intervention on Active Engagement in Young Autistic Children: A Preliminary Study.
IF 2.2 · Q2 (Medicine)
Journal of Speech Language and Hearing Research. Pub Date: 2025-03-05; Epub Date: 2025-01-29; DOI: 10.1044/2024_JSLHR-24-00322
Rachel Reetzke, Rebecca Landa
{"title":"Effects of an Inclusive Group-Based Naturalistic Developmental Behavioral Intervention on Active Engagement in Young Autistic Children: A Preliminary Study.","authors":"Rachel Reetzke, Rebecca Landa","doi":"10.1044/2024_JSLHR-24-00322","DOIUrl":"10.1044/2024_JSLHR-24-00322","url":null,"abstract":"<p><strong>Purpose: </strong>Despite group-level improvements in active engagement and related outcomes, significant individual variability in response to early intervention exists. The purpose of this preliminary study was to examine the effects of a group-based Naturalistic Developmental Behavioral Intervention (NDBI) on active engagement among a heterogeneous sample of young autistic children in a clinical setting.</p><p><strong>Method: </strong>Sixty-three autistic children aged 24-60 months (<i>M</i> = 44.95, <i>SD</i> = 10.77) participated in an inclusive group-based NDBI over a period of 10 months. Speech-language pathologists used an abbreviated version of the measure of active engagement to rate children's active engagement at three treatment time points.</p><p><strong>Results: </strong>Linear mixed-effects regression analyses revealed that active engagement significantly increased from Time 1 to Time 2 (after 6 months of the group-based NDBI) and persisted through Time 3 (after 10 months of the group-based NDBI). Symmetrized percent change analyses revealed that 48% of the sample (<i>n</i> = 30) exhibited an increasing trajectory, 29% were stable, and 24% showed a decreasing trajectory. Age and parent-reported social pragmatic concerns at program entry, as well as the length of time participating in the group-based NDBI, were differentially associated with the identified subgroups, signaling baseline child characteristics that may be associated with NDBI response.</p><p><strong>Conclusion: </strong>These findings highlight the importance of careful monitoring of active engagement to guide clinical decision making regarding changing intervention strategies, targets, or the intensity of the NDBI if gains are not observed.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1137-1150"},"PeriodicalIF":2.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143069598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
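For readers unfamiliar with the trajectory analysis mentioned in the entry above, a commonly used formulation of symmetrized percent change scales the raw change between two time points by their mean, which treats gains and losses symmetrically. The snippet below is a minimal sketch of that formula plus a toy trajectory-labeling rule; the stability cutoff and the example ratings are assumptions for illustration, not values reported by the study.

```python
def symmetrized_percent_change(t1, t2):
    """Symmetrized percent change between two measurements.

    Common definition: 200 * (t2 - t1) / (t1 + t2), bounded between -200 and
    +200, so an increase and the corresponding decrease have equal magnitude.
    """
    return 200.0 * (t2 - t1) / (t1 + t2)

def classify_trajectory(t1, t2, stability_band=10.0):
    """Label a trajectory as increasing, stable, or decreasing.

    `stability_band` (in percent) is a hypothetical cutoff used only for this
    illustration; the abstract does not report the threshold actually used.
    """
    spc = symmetrized_percent_change(t1, t2)
    if spc > stability_band:
        return "increasing"
    if spc < -stability_band:
        return "decreasing"
    return "stable"

# Example: an engagement rating rising from 3.0 to 4.0 across time points.
print(symmetrized_percent_change(3.0, 4.0))  # ~28.6
print(classify_trajectory(3.0, 4.0))         # "increasing"
```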
Predictive Use of Grammatical Gender During Noun Phrase Decoding: An Eye-Tracking Study With German Children With Developmental Language Disorder.
IF 2.2 · Q2 (Medicine)
Journal of Speech Language and Hearing Research. Pub Date: 2025-03-05; Epub Date: 2025-02-06; DOI: 10.1044/2024_JSLHR-24-00389
Jürgen Cholewa, Annika Kirschenkern, Frederike Steinke, Thomas Günther
{"title":"Predictive Use of Grammatical Gender During Noun Phrase Decoding: An Eye-Tracking Study With German Children With Developmental Language Disorder.","authors":"Jürgen Cholewa, Annika Kirschenkern, Frederike Steinke, Thomas Günther","doi":"10.1044/2024_JSLHR-24-00389","DOIUrl":"10.1044/2024_JSLHR-24-00389","url":null,"abstract":"<p><strong>Purpose: </strong>Predictive language comprehension has become a major topic in psycholinguistic research. The study described in this article aims to investigate if German children with developmental language disorder (DLD) use grammatical gender agreement to predict the continuation of noun phrases in the same way as it has been observed for typically developing (TD) children. The study also seeks to differentiate between specific and general deficits in predictive processing by exploring the anticipatory use of semantic information. Additionally, the research examines whether the processing of gender and semantic information varies with the speed of stimulus presentation.</p><p><strong>Method: </strong>The study included 30 children with DLD (average age = 8.7 years) and 26 TD children (average age = 8.4 years) who participated in a visual-world eye-tracking study. Noun phrases, consisting of an article, an adjective, and a noun, were presented that matched with only one of two target pictures. The phrases contained a gender cue, a semantic cue, a combination of both, or none of these cues. The cues were provided by the article and/or adjective and could be used to identify the target picture before the noun itself was presented.</p><p><strong>Results: </strong>Both groups, TD children and those with DLD, utilized predictive processing strategies in response to gender agreement and semantic information when decoding noun phrases. However, children with DLD were only able to consider gender cues when noun phrases were presented at a slower speech rate, and even then, their predictive certainty remained below the typical level for their age.</p><p><strong>Conclusion: </strong>Based on these findings, the article discusses the potential relevance of the prediction framework for explaining comprehension deficits in children with DLD, as well as the clinical implications of the results.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1056-1074"},"PeriodicalIF":2.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143366677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interpreting Pediatric Laryngeal Ultrasonography: A Training Protocol for Novice Examiners.
IF 2.2 · Q2 (Medicine)
Journal of Speech Language and Hearing Research. Pub Date: 2025-03-05; Epub Date: 2025-02-06; DOI: 10.1044/2024_JSLHR-24-00367
Julianne T Lee, Alice K-Y Siu, Estella P-M Ma
{"title":"Interpreting Pediatric Laryngeal Ultrasonography: A Training Protocol for Novice Examiners.","authors":"Julianne T Lee, Alice K-Y Siu, Estella P-M Ma","doi":"10.1044/2024_JSLHR-24-00367","DOIUrl":"10.1044/2024_JSLHR-24-00367","url":null,"abstract":"<p><strong>Objective: </strong>Laryngeal ultrasonography (LUS) is a noninvasive alternative to nasal endoscopy for diagnosing vocal fold pathologies in the pediatric population. Inducing less discomfort and physiological impact, LUS is more well tolerated by young patients. Despite its advantages, interpreting ultrasound images is highly subjective, potentially undermining diagnostic accuracy. To address the limitation, this research aims to evaluate the effect of training on novice examiners' LUS interpretation proficiency and, secondly, whether examiners' interpretation confidence increases after receiving the training.</p><p><strong>Method: </strong>Thirty-eight novice examiners were randomly assigned to the experimental and control group where the former received training. A stimulus-response-feedback-stimulus paradigm was employed in the training. Qualitatively, the presence of vocal fold lesions and vocal fold motion impairment was examined. Quantitatively, the left and right vocal fold-arytenoid angles were measured.</p><p><strong>Results: </strong>Results showed that training significantly improved diagnostic accuracy in qualitative measurements. Quantitatively, statistically significant effects were found posttraining with enhanced intrarater agreement and reduced interrater variability. A substantial increase in interpretation confidence was observed following training.</p><p><strong>Conclusions: </strong>In conclusion, there is an overall significant training effect on novice examiners' proficiency in LUS image interpretation. For future directions, it is recommended to investigate the training effect on the proficiency from ultrasound image acquisition to interpretation.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"935-948"},"PeriodicalIF":2.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143366638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Classification of Hearing Status Based on Pupil Measures During Sentence Perception.
IF 2.2 · Q2 (Medicine)
Journal of Speech Language and Hearing Research. Pub Date: 2025-03-05; Epub Date: 2025-02-14; DOI: 10.1044/2024_JSLHR-24-00005
Patrycja Lebiecka-Johansen, Adriana A Zekveld, Dorothea Wendt, Thomas Koelewijn, Afaan I Muhammad, Sophia E Kramer
{"title":"Classification of Hearing Status Based on Pupil Measures During Sentence Perception.","authors":"Patrycja Lebiecka-Johansen, Adriana A Zekveld, Dorothea Wendt, Thomas Koelewijn, Afaan I Muhammad, Sophia E Kramer","doi":"10.1044/2024_JSLHR-24-00005","DOIUrl":"10.1044/2024_JSLHR-24-00005","url":null,"abstract":"<p><strong>Purpose: </strong>Speech understanding in noise can be effortful, especially for people with hearing impairment. To compensate for reduced acuity, hearing-impaired (HI) listeners may be allocating listening effort differently than normal-hearing (NH) peers. We expected that this might influence measures derived from the pupil dilation response. To investigate this in more detail, we assessed the sensitivity of pupil measures to hearing-related changes in effort allocation. We used a machine learning-based classification framework capable of combining and ranking measures to examine hearing-related, stimulus-related (signal-to-noise ratio [SNR]), and task response-related changes in pupil measures.</p><p><strong>Method: </strong>Pupil data from 32 NH (40-70 years old, <i>M</i> = 51.3 years, six males) and 32 HI (31-76 years old, <i>M</i> = 59 years, 13 males) listeners were recorded during an adaptive speech reception threshold test. Peak pupil dilation (PPD), mean pupil dilation (MPD), principal pupil components (rotated principal components [RPCs]), and baseline pupil size (BPS) were calculated. As a precondition for ranking pupil measures, the ability to classify hearing status (NH/HI), SNR (high/low), and task response (correct/incorrect) above random prediction level was assessed. This precondition was met when classifying hearing status in subsets of data with varying SNR and task response, SNR in the NH group, and task response in the HI group.</p><p><strong>Results: </strong>A combination of pupil measures was necessary to classify the dependent factors. Hearing status, SNR, and task response were predicted primarily by the established measures-PPD (maximum effort), RPC2 (speech processing), and BPS (task anticipation)-and by the novel measures RPC1 (listening) and RPC3 (response preparation) in tasks involving SNR as an outcome or sometimes difficulty criterion.</p><p><strong>Conclusions: </strong>A machine learning-based classification framework can assess sensitivity of, and rank the importance of, pupil measures in relation to three effort modulators (factors) during speech perception in noise. This indicates that the effects of these factors on the pupil measures allow for reasonable classification performance. Moreover, the varying contributions of each measure to the classification models suggest they are not equally affected by these factors. Thus, this study enhances our understanding of pupil responses and their sensitivity to relevant factors.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28225199.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1188-1208"},"PeriodicalIF":2.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143417048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
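The classification framework in the entry above combines several pupil measures and ranks how much each contributes to predicting hearing status, SNR, or task response. The study's own pipeline is not reproduced here; as a minimal sketch of the general approach, the code below trains a cross-validated logistic-regression classifier on simulated PPD/MPD/BPS/RPC features to separate NH from HI listeners. All feature values, group differences, and the choice of classifier are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated feature table: one row per observation with the pupil measures
# named in the abstract (PPD, MPD, BPS, and three rotated principal components).
# The numbers and group differences are invented purely for illustration.
rng = np.random.default_rng(42)
n_per_group = 200
nh = rng.normal(loc=[0.25, 0.15, 3.8, 0.0, 0.0, 0.0], scale=0.1, size=(n_per_group, 6))
hi = rng.normal(loc=[0.20, 0.12, 3.6, 0.1, 0.0, -0.1], scale=0.1, size=(n_per_group, 6))
X = np.vstack([nh, hi])
y = np.array([0] * n_per_group + [1] * n_per_group)  # 0 = NH, 1 = HI

# Standardize features, then fit a regularized logistic regression; the
# coefficient magnitudes offer one simple way to rank feature contributions.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print("Cross-validated accuracy:", round(scores.mean(), 3))

model.fit(X, y)
feature_names = ["PPD", "MPD", "BPS", "RPC1", "RPC2", "RPC3"]
for name, coef in zip(feature_names, model.named_steps["logisticregression"].coef_[0]):
    print(f"{name}: {coef:+.2f}")
```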
The Roles of Language Ability and Language Dominance in Bilingual Parent-Child Language Alignment.
IF 2.2 · Q2 (Medicine)
Journal of Speech Language and Hearing Research. Pub Date: 2025-03-05; Epub Date: 2025-02-20; DOI: 10.1044/2024_JSLHR-24-00240
Caitlyn Slawny, Emma Libersky, Margarita Kaushanskaya
{"title":"The Roles of Language Ability and Language Dominance in Bilingual Parent-Child Language Alignment.","authors":"Caitlyn Slawny, Emma Libersky, Margarita Kaushanskaya","doi":"10.1044/2024_JSLHR-24-00240","DOIUrl":"10.1044/2024_JSLHR-24-00240","url":null,"abstract":"<p><strong>Purpose: </strong>In the current study, we examined the alignment of language choice of bilingual parent-child dyads in play-based interactions.</p><p><strong>Method: </strong>Forty-four bilingual Spanish-English parent-child dyads participated in a 10-min naturalistic free-play interaction to determine whether bilingual children and their parents respond to each other in the same language(s) across conversational turns and whether children's language ability and children's and parents' language dominance affect language alignment. Children's language ability was indexed by the Bilingual English-Spanish Assessment. Logistic regression was used to test the effects of children's language ability and children's and parents' language dominance on the alignment of language choice.</p><p><strong>Results: </strong>Results revealed that children and parents largely aligned their language choice and that children's and parents' language dominance, but not children's language ability, influenced alignment. Patterns of alignment differed between children and parents. Children aligned to their dominant language, and this was true for both English- and Spanish-dominant children. In contrast, English-dominant parents aligned equally to both languages, whereas Spanish-dominant parents aligned significantly more to Spanish.</p><p><strong>Conclusion: </strong>Together, these findings suggest that bilinguals' alignment of language choice is deeply sensitive to language dominance effects in both children and adults but that parents may also choose their language strategically in conversations with their children.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1092-1104"},"PeriodicalIF":2.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143469980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
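The Method above tests predictors of alignment with logistic regression. As a rough, self-contained sketch under assumed data (not the study's actual model, variables, or dataset), the snippet below fits a logistic regression of turn-level alignment on simulated dominance and language-ability predictors; a fuller analysis would likely also include dyad-level random effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated turn-level data: whether a response is in the same language as the
# preceding turn ("aligned"), the child's language dominance, and a standardized
# language-ability score. All values are invented; the predictor names only
# mirror the constructs described in the abstract.
rng = np.random.default_rng(1)
n = 500
dominance = rng.choice(["English", "Spanish"], size=n)
ability_z = rng.normal(size=n)
# In this toy generator, alignment depends on dominance but not ability,
# loosely echoing the reported pattern of results.
logit_p = 0.5 + 1.0 * (dominance == "English")
aligned = rng.random(n) < 1 / (1 + np.exp(-logit_p))
df = pd.DataFrame({"aligned": aligned.astype(int),
                   "dominance": dominance,
                   "ability_z": ability_z})

# Logistic regression of alignment on dominance and language ability.
fit = smf.logit("aligned ~ C(dominance) + ability_z", data=df).fit(disp=False)
print(fit.summary())
```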
Methodological Stimulus Considerations for Auditory Emotion Recognition Test Design.
IF 2.2 · Q2 (Medicine)
Journal of Speech Language and Hearing Research. Pub Date: 2025-03-05; Epub Date: 2025-02-03; DOI: 10.1044/2024_JSLHR-24-00189
Shae D Morgan, Bailey LaPaugh
{"title":"Methodological Stimulus Considerations for Auditory Emotion Recognition Test Design.","authors":"Shae D Morgan, Bailey LaPaugh","doi":"10.1044/2024_JSLHR-24-00189","DOIUrl":"10.1044/2024_JSLHR-24-00189","url":null,"abstract":"<p><strong>Purpose: </strong>Many studies have investigated test design influences (e.g., number of stimuli, open- vs. closed-set tasks) on word recognition ability, but the impact that stimuli selection has on auditory emotion recognition has not been explored. This study assessed the impact of some stimulus parameters and test design methodologies on emotion recognition performance to optimize stimuli to use for auditory emotion recognition testing.</p><p><strong>Method: </strong>Twenty-five young adult participants with normal or near-normal hearing completed four tasks evaluating methodological parameters that may affect emotion recognition performance. The four conditions assessed (a) word stimuli versus sentence stimuli, (b) the total number of stimuli and number of stimuli per emotion category, (c) the number of talkers, and (d) the number of emotion categories.</p><p><strong>Results: </strong>Sentence stimuli yielded higher emotion recognition performance and increased performance variability compared to word stimuli. Recognition performance was independent of the number of stimuli per category, the number of talkers, and the number of emotion categories. Task duration expectedly increased with the total number of stimuli. A test of auditory emotion recognition that combined these design methodologies yielded high performance with low variability for listeners with normal hearing.</p><p><strong>Conclusions: </strong>Stimulus selection influences performance and test reliability for auditory emotion recognition. Researchers should consider these influences when designing future tests of auditory emotion recognition to ensure tests are able to accomplish the study's aims.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28270943.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1209-1224"},"PeriodicalIF":2.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143081997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Talker Differences in Perceived Emotion in Clear and Conversational Speech.
IF 2.2 · Q2 (Medicine)
Journal of Speech Language and Hearing Research. Pub Date: 2025-03-05; Epub Date: 2025-02-18; DOI: 10.1044/2024_JSLHR-24-00325
Elizabeth D Young, Shae D Morgan, Sarah Hargus Ferguson
{"title":"Talker Differences in Perceived Emotion in Clear and Conversational Speech.","authors":"Elizabeth D Young, Shae D Morgan, Sarah Hargus Ferguson","doi":"10.1044/2024_JSLHR-24-00325","DOIUrl":"10.1044/2024_JSLHR-24-00325","url":null,"abstract":"<p><strong>Purpose: </strong>Previous work has shown that judgments of emotion differ between clear and conversational speech, particularly for perceived anger. The current study examines talker differences in perceived emotion for a database of talkers producing clear and conversational speech.</p><p><strong>Method: </strong>A database of 41 talkers was used to assess talker differences in six emotion categories (\"Anger,\" \"Fear,\" \"Disgust,\" \"Happiness,\" \"Sadness,\" and \"Neutral\"). Twenty-six healthy young adult listeners rated perceived emotion in 14 emotionally neutral sentences produced in clear and conversational styles by all talkers in the database. Generalized linear mixed-effects modeling was utilized to examine talker differences in all six emotion categories.</p><p><strong>Results: </strong>There was a significant effect of speaking style for all emotion categories, and substantial talker differences existed after controlling for speaking style in all categories. Additionally, many emotion categories, including anger, had significant Talker × Style interactions. Perceived anger was significantly higher in clear speech compared to conversational speech for 85% of the talkers.</p><p><strong>Conclusions: </strong>While there is a large speaking style effect for perceived anger, the magnitude of the effect varies between talkers. The perception of negatively valenced emotions in clear speech, including anger, may result in unintended interpersonal consequences for those utilizing clear speech as a communication facilitator. Further research is needed to examine potential acoustic sources of perceived anger in clear speech.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28304384.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1263-1276"},"PeriodicalIF":2.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143442851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0