Community-Supported Shared Infrastructure in Support of Speech Accessibility
Mark Hasegawa-Johnson, Xiuwen Zheng, Heejin Kim, Clarion Mendes, Meg Dickinson, Erik Hege, Chris Zwilling, Marie Moore Channell, Laura Mattie, Heather Hodges, Lorraine Ramig, Mary Bellard, Mike Shebanek, Leda Sarı, Kaustubh Kalgaonkar, David Frerichs, Jeffrey P Bigham, Leah Findlater, Colin Lea, Sarah Herrlinger, Peter Korn, Shadi Abou-Zahra, Rus Heywood, Katrin Tomanek, Bob MacDonald
Journal of Speech Language and Hearing Research, pp. 4162-4175. Published November 7, 2024. https://doi.org/10.1044/2024_JSLHR-24-00122

Purpose: The Speech Accessibility Project (SAP) intends to facilitate research and development in automatic speech recognition (ASR) and other machine learning tasks for people with speech disabilities. The purpose of this article is to introduce the project as a resource for researchers, including a baseline analysis of the first released data package.

Method: The project aims to facilitate ASR research by collecting, curating, and distributing transcribed U.S. English speech from people with speech and/or language disabilities. Participants record speech from their place of residence by connecting their personal computer, cell phone, and assistive devices, if needed, to the SAP web portal. All samples are manually transcribed, and 30 per participant are annotated using differential diagnostic pattern dimensions. For ASR experiments, participants have been randomly assigned to a training set, a development set for controlled testing of a trained ASR, and a test set for evaluating ASR error rate.

Results: The SAP 2023-10-05 Data Package contains the speech of 211 people with dysarthria as a correlate of Parkinson's disease, and the associated test set contains 42 additional speakers. A baseline ASR with a word error rate of 3.4% for typical speakers transcribes the test speech with a word error rate of 36.3%. Fine-tuning reduces the word error rate to 23.7%.

Conclusions: Preliminary findings suggest that a large corpus of dysarthric and dysphonic speech has the potential to significantly improve speech technology for people with disabilities. By providing these data to researchers, the SAP intends to significantly accelerate research into accessible speech technology.

Supplemental material: https://doi.org/10.23641/asha.27078079

{"title":"Artificial Intelligence in Communication Sciences and Disorders: Introduction to the Forum.","authors":"Jordan R Green","doi":"10.1044/2024_JSLHR-24-00594","DOIUrl":"10.1044/2024_JSLHR-24-00594","url":null,"abstract":"","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4157-4161"},"PeriodicalIF":4.6,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11567088/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142480231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Preliminary Exploration of Variations in Measures of Pharyngeal Area During Nonswallowing Tasks
Catriona M Steele, Renata Mancopes, Emily Barrett, Vanessa Panes, Melanie Peladeau-Pigeon, Michelle M Simmons, Sana Smaoui
Journal of Speech Language and Hearing Research, pp. 4304-4313. Published November 7, 2024. https://doi.org/10.1044/2024_JSLHR-24-00418
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11567086/pdf/

Purpose: Age- and disease-related changes in oropharyngeal anatomy and physiology may be identified through quantitative videofluoroscopic measures of pharyngeal area and dynamics. Pixel-based measures of nonconstricted pharyngeal area (PhAR) are typically taken during oral bolus hold tasks or on postswallow rest frames. A recent study of 87 healthy adults reported a mean postswallow PhAR of 62 %(C2-4)² (range: 25%-135%) and significantly larger PhAR in males. The fact that measures were taken after initial bolus swallows, without controlling for the presence of subsequent clearing swallows, was identified as a potential source of variation. A subset of study participants had completed a protocol including additional static nonswallowing tasks, enabling us to explore variability across those tasks, taking sex differences into account.

Method: Videofluoroscopy still shots were analyzed for 20 healthy adults (10 males, 10 females, M_age = 26 years) in head-neutral position, chin-down and chin-up positions, a sustained /a/ vowel vocalization, and oral bolus hold tasks (1 cc, 5 cc). Trained raters used ImageJ software to measure PhAR in %(C2-4)² units. Measures were compared to previously reported mean postswallow PhAR for the same participants: (a) explorations of sex differences; (b) pairwise linear mixed-model analyses of variance (ANOVAs) of PhAR for each nonswallowing task versus postswallow measures, controlling for sex; and (c) a combined mixed-model ANOVA to confirm comparability of the subset of tasks showing no significant differences from postswallow measures in step (b).

Results: Overall, PhAR measures were significantly larger in male participants; however, most pairwise task comparisons did not differ by sex. No significant differences from postswallow measures were seen for the 5-cc bolus hold, the chin-down and chin-up postures, and the second (but not the first) of two repeated head-neutral still shots. PhAR during a 5-cc bolus hold was most similar to postswallow measures: mean ± standard deviation of 51 ± 13 %(C2-4)² in females and 64 ± 16 %(C2-4)² in males.

Conclusions: PhAR is larger in men than in women. Oral bolus hold tasks with a 5-cc liquid bolus yield measures similar to those obtained from postswallow rest frames.

{"title":"Influences of Attentional Focus on Across- and Within-Sentence Variability in Adults Who Do and Do Not Stutter.","authors":"Kim R Bauerly, Eric S Jackson","doi":"10.1044/2024_JSLHR-24-00256","DOIUrl":"https://doi.org/10.1044/2024_JSLHR-24-00256","url":null,"abstract":"<p><strong>Purpose: </strong>Research has found an advantage to maintaining an external attentional focus while speaking as an increase in accuracy and a decrease in across-sentence variability has been found when producing oral-motor and speech tasks. What is not clear is how attention affects articulatory variability both <i>across</i> and <i>within</i> sentences, or how attention affects articulatory control in speakers who stutter. The purpose of this study was to investigate the effects of an internal versus external attention focus on articulatory variability at the sentence level.</p><p><strong>Method: </strong>This study used linear (spatial-temporal index [STI]) and nonlinear (recurrence quantification analysis [RQA]) indices to measure lip aperture variability in 10 adults who stutter (AWS) and 15 adults who do not stutter (ANS) while they repeated sentences under an internal versus external attentional focus, virtual reality task (withVR.app; retrieved December 2023 from https://therapy.withvr.app). Four RQA measures were used to calculate within sentence variability including percent recurrence, percent determinism (%DET), stability (MAXLINE), and stationarity (TREND). Sentence duration measures were also obtained.</p><p><strong>Results: </strong>AWS' movement durations were significantly longer than those of the ANS across conditions, and the AWS were more affected by the attentional focus shifts as their speech rate significantly increased when speaking with an external focus. AWS' speech patterns were also significantly more deterministic (%DET) and stable (MAXLINE) across attentional focus conditions compared to those of the ANS. Both groups showed an effect from attentional shifts as they exhibited less variability (i.e., more consistent) across sentences (STI) and less determinism (%DET) and stability (MAXLINE) within sentences when repeating sentences under an external attentional focus. STI values were not significantly different between the AWS and ANS for the internal or external attentional focus tasks. There were no significant main effects for group or condition for TREND; however, a main effect for sentence type was found.</p><p><strong>Conclusion: </strong>Results suggest that AWS use a more restrictive and less flexible approach to movement and that an external focus fosters more flexibility and thus responsiveness to external factors.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-13"},"PeriodicalIF":2.2,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142590975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Stability of Linguistic Skills of Arabic-Speaking Children Between Kindergarten and First Grade.","authors":"Jasmeen Mansour-Adwan, Asaid Khateb","doi":"10.1044/2024_JSLHR-23-00533","DOIUrl":"https://doi.org/10.1044/2024_JSLHR-23-00533","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to evaluate the stability of phonological awareness (PA) and language achievements between kindergarten and first grade among Arabic-speaking children.</p><p><strong>Method: </strong>A total of 1,158 children were assessed in PA and language skills in both grades and were classified based on distinct and integrated achievements on PA and language following percentiles' cutoff criteria. The classification of distinct achievements constituted high, intermediate, low, and very low achievement-based groups for each domain. The classification of the integrated achievements on both domains constituted four groups: intermediate-high PA and language, very low PA, very low language, and doubly low (very low PA and language). Descriptive statistics and McNemar's tests were used to examine the stability of these groups.</p><p><strong>Results: </strong>The analyses showed a significant improvement in achievements on most tasks. The distinct classification for PA and language indicated that many more kindergarteners in the extreme distribution with high and very low achievement levels maintained this profile in first grade compared to those with intermediate achievements. For PA, 55.7% of kindergarteners with high, 30% with intermediate, 30.4% with low, and 45.5% with very low achievements maintained their achievements in first grade. For language, 52.5% of kindergarteners with high, 34.5% with intermediate, 38.8% with low, and 59.8% with very low achievements maintained their language achievements. The integrated classification indicated a higher achievement stability rate for kindergarteners with intermediate-high PA and language (91.3%) and for doubly low achievers (84.7%) compared to very low PA (24.1%) or very low language (31.8%) achievers.</p><p><strong>Conclusions: </strong>The study indicated a higher variability in the distribution of the intermediate achievements compared to the high and very low achievements, which were more stable across grade. The results emphasize the need for dynamic linguistic assessments and early intervention for children with very low achievements in PA and language who show a poor prognosis.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-16"},"PeriodicalIF":2.2,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Category-Sensitive Age-Related Shifts Between Prosodic and Semantic Dominance in Emotion Perception Linked to Cognitive Capacities
Yi Lin, Xiaoqing Ye, Huaiyi Zhang, Fei Xu, Jingyu Zhang, Hongwei Ding, Yang Zhang
Journal of Speech Language and Hearing Research, pp. 1-21. Published November 4, 2024. https://doi.org/10.1044/2024_JSLHR-23-00817

Purpose: Prior research has extensively documented challenges in recognizing verbal and nonverbal emotion among older individuals compared with younger counterparts. However, the nature of these age-related changes remains unclear. The present study investigated how older and younger adults comprehend four basic emotions (anger, happiness, neutrality, and sadness) conveyed through verbal (semantic) and nonverbal (facial and prosodic) channels.

Method: A total of 73 older adults (43 women, M_age = 70.18 years) and 74 younger adults (37 women, M_age = 22.01 years) took part in a fixed-choice test for recognizing emotions presented visually via facial expressions or auditorily through prosody or semantics.

Results: The results confirmed age-related decline in recognizing emotions across all channels except for identifying happy facial expressions. Furthermore, the two age groups demonstrated both commonalities and disparities in their inclinations toward specific channels. While both groups displayed a shared dominance of visual facial cues over auditory emotional signals, older adults preferred semantics, whereas younger adults preferred prosody, in auditory emotion perception. Notably, the dominance effects observed in older adults for visual and semantic cues were less pronounced for sadness and anger than for other emotions. These challenges in emotion recognition and the shifts in channel preferences among older adults were correlated with their general cognitive capabilities.

Conclusion: Together, the findings underscore that age-related obstacles in perceiving emotions and alterations in channel dominance, which vary by emotional category, are significantly intertwined with overall cognitive functioning.

Supplemental material: https://doi.org/10.23641/asha.27307251

{"title":"Constrained Emotional Sentence Production in Parkinson's Disease.","authors":"Audrey A Hazamy, Hyejin Park, Lori J P Altmann","doi":"10.1044/2024_JSLHR-23-00566","DOIUrl":"https://doi.org/10.1044/2024_JSLHR-23-00566","url":null,"abstract":"<p><strong>Purpose: </strong>Deficits in the processing and production of emotional cues are well documented in the Parkinson's disease (PD) literature; however, few have ventured to explore how impairments may impact emotional language use in this population, particularly beyond the word level. Emotional language is an important multidimensional manner of communicating one's wants and needs; thus, the current study sought to explore how various aspects of language production may be impacted by the emotionality of a stimulus.</p><p><strong>Method: </strong>Eighteen persons with PD and 22 healthy adults completed a constrained emotional sentence production task in which the affective target word was either a noun or a verb. Output was analyzed for fluency, grammaticality, completeness, and response initiation times. Cognitive (i.e., working memory [WM], inhibition, and switching) and mood (i.e., depression and apathy) measures were examined as factors influencing performance.</p><p><strong>Results: </strong>Individuals with PD produced fewer fluent responses than healthy controls. Furthermore, they had fewer grammatical responses in their production of negative sentences and exhibited reduced information completeness when producing sentences containing positive stimuli. Group differences could not be wholly attributed to individual differences in WM or apathy.</p><p><strong>Conclusions: </strong>Our results support those of others that document language production deficits in individuals with PD above and beyond those impairments that can be explained by the select cognitive abilities explored here. Moreover, the emotionality of the topic may impact various aspects of communicative competence in persons with PD. For instance, disease processes associated with degeneration of neural substrates important for processing negative stimuli may also impact the grammaticality of productions containing negatively valenced content. Thus, it is important to consider how individuals in this population communicate during emotional circumstances.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.27289413.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-17"},"PeriodicalIF":2.2,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Computational Model Reveals How Varying Muscle Activation in the Lateral Pharyngeal Wall and Soft Palate Differentiates Velopharyngeal Closure Patterns
Matthew D DiSalvo, Silvia S Blemker, Kazlin N Mason
Journal of Speech Language and Hearing Research, pp. 1-13. Published October 30, 2024. https://doi.org/10.1044/2024_JSLHR-24-00353

Purpose: Finite element (FE) models have emerged as a powerful method for studying the biomechanical complexities of velopharyngeal (VP) function. However, existing models have overlooked the active contributions of the lateral pharyngeal wall (LPW) in VP closure. This study aimed to develop and validate a more comprehensive FE model of VP closure that includes the superior pharyngeal constrictor (SPC) muscle within the LPW as an active component of VP closure.

Method: The geometry of the velum and the lateral and posterior pharyngeal walls, with biomechanical activation governed by the levator veli palatini (LVP) and SPC muscles, was incorporated into an FE model of VP closure. Differing muscle activations were employed to identify the impact of anatomic contributions from the SPC muscle, LVP muscle, and/or velum for achieving VP closure. The model was validated against normative magnetic resonance imaging data at rest and during speech production.

Results: A highly accurate and validated biomechanical model of VP function was developed. Differing combinations and activations of muscles within the LPW and velum provided insight into the relationship between muscle activation and closure patterns, with objective quantification of the anatomic change necessary to achieve VP closure.

Conclusions: This model is the first to include the anatomic properties and active contributions of the LPW and SPC muscle for achieving VP closure. Now validated, this method can be utilized to build robust, comprehensive models to understand VP dysfunction. This represents an important advancement in patient-specific modeling of VP function and provides a foundation to support the development of computational tools to meet clinical demand.

Maternal Question Use Relates to Syntactic Skills in 5- to 7-Year-Old Children
Grace Buckalew, Alexus G Ramirez, Julie M Schneider
Journal of Speech Language and Hearing Research, pp. 1-14. Published October 30, 2024. https://doi.org/10.1044/2024_JSLHR-23-00426

Purpose: This study examined how mothers' question-asking behavior relates to their child's syntactic skills. One important aspect of maternal question-asking behavior is the use of complex questions when speaking with children. These questions can differ in both purpose and structure. The purpose may be to seek out information, to teach, or to elicit a simple yes/no response; questions may even be rhetorical, with no answer intended at all. Structurally, questions may or may not include a wh-word (who, what, when, where, why, and how); wh-questions are important because they elicit utterances from the child and support vocabulary development. Despite wh-questions eliciting a response from children, it remains unknown how these questions relate to children's syntactic skills.

Method: Thirty-four mother-child dyads participated in a 15-min seminaturalistic play session. Children were between the ages of 5 and 7 years (M = 6.26 years, SD = 1.04 years; 20 girls, 14 boys). The Diagnostic Evaluation of Language Variation (DELV) assessment was used to measure children's syntactic skills. Using the Systematic Analysis of Language Transcripts software, questions were categorized by structure (wh-questions vs. non-wh-questions) and purpose (information-seeking, pedagogical, or yes/no and rhetorical questions). A repeated-measures analysis of covariance and a linear regression model were used to examine how frequently mothers asked each type of question and which types of questions were most related to children's concurrent syntactic skills.

Results: When controlling for total maternal utterances, non-wh-questions and rhetorical/yes-no questions were the most frequent question types produced by mothers, in terms of structure and purpose, respectively. However, wh-questions were predominantly information-seeking questions. This is important, as the use of information-seeking wh-questions was positively associated with children's syntactic skills, as measured by the DELV, and resulted in children producing longer utterances in response to these questions, as determined by child mean length of utterance in words.

Conclusion: Taken together, these findings suggest that maternal use of wh-questions aids syntactic skills in children ages 5-7 years, likely because such questions require a more syntactically complex response on the child's behalf.

Supplemental material: https://doi.org/10.23641/asha.27276891

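The structural coding described above reduces to a simple lexical decision (does the question begin with a wh-word?) plus a standard mean-length-of-utterance computation over the child's responses. A minimal sketch follows; the utterances and the first-word heuristic are illustrative only, whereas the study coded transcripts with the Systematic Analysis of Language Transcripts software.

```python
import re

# Toy transcript coding: flag wh- vs. non-wh questions by their first
# word, and compute a child's mean length of utterance in words (MLUw).
# The utterances and the first-word heuristic are illustrative only.
WH_WORDS = {"who", "what", "when", "where", "why", "how"}

def is_wh_question(utterance: str) -> bool:
    words = re.findall(r"[a-z']+", utterance.lower())
    return bool(words) and words[0] in WH_WORDS

def mlu_words(utterances: list[str]) -> float:
    counts = [len(re.findall(r"[a-z']+", u.lower())) for u in utterances]
    return sum(counts) / len(counts)

mother = ["What is the giraffe doing?", "Do you like this one?"]
child = ["he is eating the leaves", "yes"]
print([is_wh_question(q) for q in mother])  # [True, False]
print(mlu_words(child))                     # (5 + 1) / 2 = 3.0
```
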
Anterior-Posterior View Acquisition During Videofluoroscopy: A Survey Study Exploring Influential Factors on Speech-Language Pathologists' Practice Patterns
R Brynn Jones-Rastelli, Xi Tang, Daphna Harel, Sonja M Molfenter
Journal of Speech Language and Hearing Research, pp. 1-23. Published October 30, 2024. https://doi.org/10.1044/2024_JSLHR-24-00424

Purpose: This study explored factors influencing speech-language pathologists' (SLPs') decision making surrounding anterior-posterior (AP) view inclusion practices during videofluoroscopic swallowing studies (VFSSs) in the United States.

Method: SLPs completing VFSSs were recruited to complete an anonymous online survey. Questions represented six constructs of interest: (a) clinician demographics, (b) practice patterns, (c) diagnostic perceptions, (d) professional influences, (e) training and education, and (f) logistical facilitators and barriers. Binary logistic regression was used to explore the relationship between construct items and the likelihood of AP view inclusion.

Results: A total of 136/213 (64%) respondents reported obtaining an AP view routinely. Facilitators of AP view inclusion were post-acute work setting (OR = 3.40, p = .001); perception that department practices "probably" (OR = 5.65, p = .006) or "definitely" (OR = 5.30, p = .006) align with evidence-based practice; perception that the AP view has "a lot" (OR = 4.17, p = .025) or "a great deal" (OR = 4.77, p = .028) of diagnostic value; perception that their department is "definitely" supportive (OR = 4.69, p = .040); "moderate" (OR = 4.75, p = .001) or "no" (OR = 7.51, p < .001) equipment limitations; and radiologist support greater than "extremely unsupportive or resistant" ("somewhat unsupportive" [OR = 5.74, p = .041], "neutral" [OR = 11.23, p = .002], "somewhat supportive" [OR = 13.92, p = .001], or "extremely supportive" [OR = 13.92, p = .001]). Barriers to AP view inclusion were geographic location in the southern U.S. census region (OR = 0.31, p = .007), being "significantly" influenced by coworker opinions (OR = 0.13, p = .018), and productivity tracking (OR = 0.21, p = .008).

Conclusion: Environmental factors and organizational culture heavily influence AP view inclusion practices.
