{"title":"Mismatch Response to Native and Nonnative Vowels in Czech and Russian: Where's the Phoneme Effect?","authors":"Martina Dvořáková, Kateřina Chládková","doi":"10.1044/2024_JSLHR-23-00788","DOIUrl":"https://doi.org/10.1044/2024_JSLHR-23-00788","url":null,"abstract":"<p><strong>Purpose: </strong>The mismatch negativity (MMN), a neural index of speech sound discrimination, is reportedly stronger for phonemically relevant speech sound differences than for phonemically irrelevant differences, even if the former are acoustically smaller. Some prior studies failed to find language-specific phoneme-dependent modulation of the MMN, and only a handful of early studies tested how the MMN is affected by phoneme status versus acoustic distance. The present study tested whether the phoneme-over-acoustics effect is replicable with new sounds, new languages, and a design that considers the directionality of the contrast.</p><p><strong>Method: </strong>Czech (<i>n</i> = 23) and Russian (<i>n</i> = 24) speakers listened passively to oddball blocks with an acoustically small Czech /i/-/ɪ/ and an acoustically larger Russian /i/-/ɨ/ contrast. MMN was calculated using two attested approaches: from physically identical stimuli across blocks and from physically different stimuli within blocks. Mixed-effects models tested whether the MMN amplitude and latency are affected by vowel contrast in interaction with language background, that is, by the language-specific phoneme status.</p><p><strong>Results: </strong>The analyses failed to detect an interaction of vowel contrast and language background on MMN amplitude. Analyses of Bayes factors indicated very strong support for the null interaction effect of language background by vowel contrast, that is, an absence of the language-specific phoneme effect. Some directionality effects were detected in the within-block analysis.</p><p><strong>Conclusions: </strong>The results point toward a lack of a language-specific phoneme effect on the MMN amplitude. Although the literature mostly reports phoneme effects on the MMN, the present lack of an effect is in line with some prior studies that failed to find language-specific phoneme modulations of the MMN. This has implications for language learning and developmental research. Given the occasional lack of language-specific effects in the MMN of healthy adults, one should be cautious when interpreting MMN as an index of language maturation or competence in children or in atypical populations.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":"68 5","pages":"2175-2190"},"PeriodicalIF":2.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144063152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Repetitive Negative Thinking as a Mechanism of Stuttering Anticipation.","authors":"Seth E Tichenor, Bridget Walsh, Katelyn L Gerwin, J Scott Yaruss","doi":"10.1044/2025_JSLHR-24-00175","DOIUrl":"10.1044/2025_JSLHR-24-00175","url":null,"abstract":"<p><strong>Purpose: </strong>In the context of stuttering, <i>anticipation</i> refers to the sensation that one may soon stutter. Although anticipation is widely reported, much is still unknown about how the phenomenon develops and how people respond to it as they live their lives. To address these gaps, this study specified the relationship between repetitive negative thinking (RNT), anticipation, and anticipation responses. This study also determined whether individual differences in a person's <i>goal when speaking</i> (i.e., speaking fluently or not stuttering vs. stuttering openly) predicted the different ways people respond to anticipation.</p><p><strong>Method: </strong>Five hundred and ten stutterers (427 adults who stutter, ages 18-86 years, and 83 adolescents who stutter, ages 10-18 years) answered questions about anticipation, their responses to anticipation, how frequently they engage in RNT, and what their <i>goals when speaking</i> are.</p><p><strong>Results: </strong>Exploratory factor analysis revealed that responses to anticipation can be described in terms of two factors: <i>avoidance</i> and <i>acceptance</i>. <i>Avoidance</i> responses to anticipation were more common than <i>acceptance</i> in both groups. Adults and adolescents were more likely to experience anticipation and respond with avoidance behaviors if they more frequently engage in RNT or less often have the <i>goal when speaking</i> of openly stuttering. Data also supported and extended evidence that anticipation is commonly experienced in adolescents and adults who stutter.</p><p><strong>Discussion: </strong>Findings extend the understanding of how anticipation and anticipation responses may develop based on an individual's engagement with RNT and goals when speaking. The relationship between RNT and anticipation underscores the need for future investigations focusing on preventing the development of negative responses to anticipation via holistic therapy.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28635719.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2236-2258"},"PeriodicalIF":2.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143804724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pupillometry Reveals the Role of Signal-to-Noise Ratio in Adaption to Linguistic Interference Over Time.","authors":"Alex Mepham, Sarah Knight, Ronan McGarrigle, Lyndon Rakusen, Sven Mattys","doi":"10.1044/2025_JSLHR-24-00658","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00658","url":null,"abstract":"<p><strong>Purpose: </strong>Studies of speech-in-speech listening show that intelligible maskers are more detrimental to target perception than unintelligible maskers, an effect we refer to as linguistic interference. Research also shows that performance improves over time through adaptation. The extent to which the speed of adaptation differs for intelligible and unintelligible maskers and whether this pattern is reflected in changes in listening effort are open questions.</p><p><strong>Method: </strong>In this preregistered study, native English listeners transcribed English sentences against an intelligible masker (time-forward English talkers) versus an unintelligible masker (time-reversed English talkers). Over 50 trials, transcription accuracy and task-evoked pupil response (TEPR) were recorded, along with self-reported effort and fatigue ratings. In Experiment 1, we used an adaptive procedure to ensure a starting performance of ~50% correct in both conditions. In Experiment 2, we used a fixed signal-to-noise ratio (SNR = -1.5 dB) for both conditions.</p><p><strong>Results: </strong>Both experiments showed performance patterns consistent with linguistic interference. The speed of adaptation depended on the SNR. When the SNR was higher for the intelligible masker condition as a result of the 50% starting performance across conditions (Experiment 1), adaptation was faster for that condition; TEPRs were not affected by trial number or condition. When the SNR was fixed (Experiment 2), adaptation was similar in both conditions, but TEPRs decreased faster in the unintelligible than the intelligible masker condition. Self-reported ratings of effort and fatigue were not affected by masker conditions in either experiment.</p><p><strong>Conclusions: </strong>Learning to segregate target speech from maskers depends on both the intelligibility of the maskers and the SNR. We discuss ways in which auditory stream formation is automatic or requires cognitive resources.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":"68 5","pages":"2291-2317"},"PeriodicalIF":2.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144051527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Listening Effort Is Difficult to Detect in a Person's Voice: Implications for Audiology Evaluations and Conversation Partners.","authors":"Matthew B Winn, Katherine H Teece","doi":"10.1044/2025_JSLHR-24-00527","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00527","url":null,"abstract":"<p><strong>Purpose: </strong>Listening can be effortful for a variety of reasons, including when a person misperceives a word in a sentence and then mentally repairs it using later context. The current study explored whether an external observer (in the role of a tester/clinician) could detect that effort by hearing the listener's voice as they repeat the sentence.</p><p><strong>Method: </strong>Stimuli were audio recordings of 13 adults with cochlear implants (CIs) repeating sentences that were either intact or with a masked word that could be inferred/repaired using context (the latter of which were previously documented to elicit greater effort). Participants (<i>n</i> = 171, including 28 audiologists) used a continuous visual analog scale to judge whether the talker heard one type of stimulus or the other. Participants were also surveyed for experiences related to detecting effort or confusion in a talker's voice.</p><p><strong>Results: </strong>Participant judges were unable to discern when the CI users were forced to effortfully infer words from context when repeating a sentence. Ratings indicated a general bias toward assuming the listener heard the original sentence correctly without any need for repair. Acoustic properties of the CI users' voices (hypothesized higher voice pitch and delayed verbal reaction time for stimuli involving repair) did not reliably correlate with ratings of uncertainty. There were also no statistically detectable advantages for audiologists or for people who reported experience or skill in discerning uncertainty in a talker's voice.</p><p><strong>Conclusions: </strong>Despite clear evidence that mental repair incurs extra effort, the process of mental repair gives no reliably perceptible signature in a talker's voice, even for audiologists and others who profess to have experience and skill in conversing with people who have hearing loss. Listening effort is at risk of going unnoticed by conversation partners and by audiologists who might underestimate a patient's effort when listening to speech.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.28688012.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":"68 5","pages":"2536-2547"},"PeriodicalIF":2.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144005197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Chinese English Learners' Recognition of Foreign-Accented Words: Roles of Sentence Context, Accent Strength, and Second Language Listening Proficiency.","authors":"Jingna Li, Kailun Zhao","doi":"10.1044/2025_JSLHR-23-00564","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-23-00564","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to examine the effects of sentence context, accent strength, and second language (L2) listening proficiency on word recognition accuracy and transcription time for Pakistani-accented English among Chinese learners of English.</p><p><strong>Method: </strong>Speech stimuli included 48 isolated words and 48 highly constraining sentences, each ending with one of the same words. Half of the words and sentences were articulated with a moderate Pakistani accent, while the other half featured a strong accent. Seventy-two participants were assigned to two groups according to their L2 listening proficiency: high and low levels. They completed a word transcription task, first with isolated words and then with sentences, with a 3-day interval between the two tasks.</p><p><strong>Results: </strong>Sentence context significantly influenced word recognition accuracy and transcription time. Participants benefited from sentence context when processing moderately and strongly accented words, although they required more transcription time in the sentence-context condition than in the word-in-isolation condition. The moderate accent yielded significantly higher accuracy and shorter transcription time than the strong accent. L2 listening proficiency significantly influenced word recognition, with high-proficiency participants achieving higher accuracy. However, proficiency did not significantly affect transcription time, although high-proficiency participants performed slightly better than low-proficiency counterparts. Significant two-way interactions among the variables underscored the interplay of factors affecting accented word recognition.</p><p><strong>Conclusion: </strong>Language instructors should integrate diverse contextual cues and consider accent strength in listening materials to improve learners' comprehension skills.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":"68 5","pages":"2517-2535"},"PeriodicalIF":2.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144016328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effectiveness of Linguistic Intervention in Children With Hearing Loss: A Systematic Review and Meta-Analysis.","authors":"Graciela Arráez Vera, Carolina Gonzálvez, Nuria Antón Ros","doi":"10.1044/2025_JSLHR-24-00589","DOIUrl":"10.1044/2025_JSLHR-24-00589","url":null,"abstract":"<p><strong>Purpose: </strong>Vocabulary, grammar, and discourse skills represent distinct dimensions of language ability in young children. Research suggests that individuals with hearing loss often have difficulties with language skills as compared to their hearing counterparts. The aim of this systematic review and meta-analysis is to analyze the effectiveness of linguistic interventions aimed at improving oral discourse in children with hearing loss.</p><p><strong>Method: </strong>A systematic review was conducted according to the PRISMA 2020 statement in five databases. A total of 23 studies were included in the systematic review. Of these, 12 studies provided sufficient data and were included in the meta-analysis. Two meta-analyses were performed, one for each dimension of oral discourse skills, differentiating between macrostructure and microstructure and calculating the effects of the intervention and potential moderating variables.</p><p><strong>Results: </strong>The results suggest positive effects of the interventions with effect sizes of <i>d</i> = 1.01 (95% confidence interval [CI; 0.58, 1.45], <i>p</i> < .001) for macrostructure and <i>d</i> = 0.87 (95% CI [0.02, 0.60], <i>p</i> < .001) for microstructure. Moderator variable analyses showed that the number of participants was the only significant factor identified for the microstructure dimension.</p><p><strong>Conclusions: </strong>Linguistic intervention programs improve the language of children with hearing loss. Most of these interventions include therapies that use visual supports and grammar instruction. However, these results should be interpreted with caution given the small number of studies and their high heterogeneity.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":"68 5","pages":"2656-2673"},"PeriodicalIF":2.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144042879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does Musical Experience Facilitate Phonetic Accommodation During Human-Robot Interaction?","authors":"Yitian Hong, Si Chen, Han Jiang","doi":"10.1044/2025_JSLHR-24-00495","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00495","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the effect of musical training on phonetic accommodation in a second language (L2) after interacting with a social robot, exploring the motivations and reasons behind speakers' accommodation strategies.</p><p><strong>Method: </strong>Fifteen L2 English speakers with long-term musical training experience (musician group) and 15 speakers without musical training experience (nonmusician group) were recruited to complete four conversational tasks with the social robot Furhat. Their production of a list of key words and carrier sentences was collected before and after conversations and used to quantify their phonetic accommodations. The spectral cues and prosodic cues of the production were extracted and analyzed.</p><p><strong>Results: </strong>Both groups showed similar convergence patterns but different divergence patterns. Specifically, the musician group showed divergence from the robot's production on more prosodic cues (mean fundamental frequency and duration) than the nonmusician group. Both groups converged their vowel formants toward the robot without group differences.</p><p><strong>Conclusions: </strong>The findings reflect individuals' assessment of the robot's speech characteristics and their efforts to enhance communication efficiency, which might indicate a special speech register used for addressing the robot. The finding is more noticeable in the musician group compared to the nonmusician group. We propose two possible explanations for the effect of musical training on phonetic accommodations: one involves the training of auditory attention and working memory, and the other relates to the refinement of phonetic talent in L2 acquisition, contributing to theories on the relationship between music and language. This study also has implications for applying musical training to speech communication training in clinical populations and for designing social robots to better serve as speech therapy partners.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":"68 5","pages":"2259-2274"},"PeriodicalIF":2.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143991646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effect of Dialect and Accent on Digit Perception in Noise in Young Listeners With Normal Hearing.","authors":"Shangqiguo Wang, Lena L N Wong, Xiaoli Shen","doi":"10.1044/2025_JSLHR-24-00472","DOIUrl":"https://doi.org/10.1044/2025_JSLHR-24-00472","url":null,"abstract":"<p><strong>Purpose: </strong>Dialect and accent factors can impact speech-in-noise testing outcomes. This study investigated these effects on the Integrated Digit-in-Noise (iDIN) test among young adults with normal hearing.</p><p><strong>Method: </strong>Dialects involve variations in grammar, vocabulary, and syntax, while accents influence only pronunciation, reflecting geographical or social origins. In Study 1, which examined dialect effects, 33 participants-all native speakers of Mandarin and various Wu dialects except Ningboese-underwent iDIN testing in both Ningboese and Mandarin (as a reference condition). In Study 2, which focused on accent effects, 39 participants-all native speakers of Mandarin and Ningboese, including 19 standard Ningboese and 20 accented Ningboese speakers-underwent iDIN testing in both Mandarin and standard Ningboese, using both fixed signal-to-noise-ratio and adaptive measurements.</p><p><strong>Results: </strong>In Study 1, the results revealed statistically significant differences between the Mandarin and Ningboese iDIN results across all conditions except for the 2-digit sequences. In Study 2, the results showed no significant differences in 3-digit speech reception thresholds (SRTs) between standard and accented Ningboese speakers, but a significant difference in 5-digit SRTs.</p><p><strong>Conclusions: </strong>In Mainland China or other regions with high linguistic diversity, accounting for dialect and accent exposure is crucial in evaluating speech recognition, and a 2-digit DIN may be more suitable for valid hearing screening.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":"68 5","pages":"2584-2596"},"PeriodicalIF":2.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144063153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Application of Ultrasound Evaluation of Swallowing to the Analysis of Hyoid Kinematics in Healthy Swallows.","authors":"Joan K-Y Ma, Alan A Wrench","doi":"10.1044/2025_JSLHR-24-00663","DOIUrl":"10.1044/2025_JSLHR-24-00663","url":null,"abstract":"<p><strong>Purpose: </strong>Using ultrasound as an adjunct tool for swallowing assessment has gained significant momentum in recent years, with research gaps in areas such as speech and language therapist-driven protocol and measurement methods. This study outlines the recording protocol of Ultrasound Evaluation of Swallowing (USES). Additionally, a set of multidimensional measurements capturing the hyoid kinematics in typical swallows was compared with previous studies to evaluate the current protocol and to develop an ultrasound database of healthy swallows to further the clinical implementation of USES.</p><p><strong>Method: </strong>Swallowing data were acquired from 41 healthy participants. Both discrete swallows (5- and 10-ml) and continuous swallows (100-ml) were analyzed. Automatic tracking of the hyoid and mandible positions using a deep neural net was applied. Six swallowing events of interest were identified for each swallow (beginning hyoid position, maximal hyoid position, hyoid advancement, hyoid retraction, peak forward velocity, and peak backward velocity), and a series of hyoid parameters characterizing the amplitude, velocity, and timing of the movement were calculated and compared across different types of swallows.</p><p><strong>Results: </strong>Results showed significant differences between continuous and discrete swallows. Continuous swallows were characterized by shorter maximal hyoid displacement, a shorter duration between the start of the swallow and the maximal displacement, a shorter total swallow duration, and lower peak velocity in both forward and backward hyoid movement. No significant difference was observed between the 5- and 10-ml swallows in hyoid movement amplitude, velocity, or duration.</p><p><strong>Conclusions: </strong>The quantification of hyoid kinematics in swallowing through the current USES recording protocol, combined with the semi-automatic extraction of hyoid function by applying a deep neural net and feature-finding algorithms, provides initial evidence to support its clinical utility in swallowing assessment. Further studies, including those of different clinical populations, to evaluate the sensitivity of the hyoid metrics in detecting changes to swallowing would support the clinical translation.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2205-2217"},"PeriodicalIF":2.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143804727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acoustic Evidence for the Tenseness and Laxity Distinction in Hijazi Arabic: A Pilot Study Using Static and Dynamic Analysis.","authors":"Wael Almurashi","doi":"10.1044/2025_JSLHR-24-00692","DOIUrl":"10.1044/2025_JSLHR-24-00692","url":null,"abstract":"<p><strong>Purpose: </strong>Standard Arabic has a simple three-vowel system with short and long distinctions, specifically /i iː a aː u uː/, traditionally believed to differ solely in duration. However, studies on regional Arabic dialects using a static approach (e.g., measuring formant values at the vowel's midpoint) have suggested that these vowels differ in both quality and quantity. This study aimed to investigate whether Hijazi Arabic (HA) exhibits a tense/lax distinction and, importantly, whether a dynamic analysis (particularly Vowel Inherent Spectral Change) could better capture this distinction, an area relatively underexplored in Arabic acoustic studies.</p><p><strong>Method: </strong>Data were collected from 20 native HA speakers, who produced six HA vowels in various consonantal environments. The first two formant values and vowel duration were automatically extracted. Static formant values were measured at the vowel's midpoint, while dynamic spectral changes were measured at three points during the vowel's duration.</p><p><strong>Results: </strong>The findings revealed a significant distinction between short and long HA vowels, not only in duration but also in their acoustic properties. In the static model, short vowels were more centralized, while long vowels were more peripheral. In the dynamic model, the spectral changes of short vowels differed significantly from those of their long counterparts.</p><p><strong>Conclusions: </strong>These results underscore the existence of a tense/lax distinction in HA, challenging the traditional view that the distinction is based solely on duration. They also highlight the value of dynamic vowel analysis for a comprehensive understanding of vowel behavior in phonological systems.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"2191-2204"},"PeriodicalIF":2.2,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143774772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Medicine","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}