{"title":"The stability of articulatory and acoustic oscillatory signals derived from speech.","authors":"Jessica Campbell, Dani Byrd, Louis Goldstein","doi":"10.1121/10.0036389","DOIUrl":"https://doi.org/10.1121/10.0036389","url":null,"abstract":"<p><p>Articulatory underpinnings of periodicities in the speech signal are unclear beyond a general alternation of vocal tract opening and closing. This study evaluates a modulatory articulatory signal that captures instantaneous change in vocal tract posture and its relation with two acoustic oscillatory signals, comparing stabilities to the progression of vowel and stressed vowel onsets. Modulatory signals can be calculated more efficiently than labeling linguistic events. These signals were more stable in periodicity than acoustic vowel onsets and not different from stressed vowel onsets, suggesting that an articulatory modulation function can provide a useful method for indexing foundational periodicities in speech without tedious annotation.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 4","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12010241/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144022521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating synthesized speech intelligibility in noise.","authors":"Ye Yang, Dathan Nguyen, Katherine Chen, Fan-Gang Zeng","doi":"10.1121/10.0036397","DOIUrl":"https://doi.org/10.1121/10.0036397","url":null,"abstract":"<p><p>Humans can modify their speech to improve intelligibility in noisy environments. With the advancement of speech synthesis technology, machines may also synthesize voices that remain highly intelligible in noisy conditions. This study evaluates both the subjective and objective intelligibility of synthesized speech in speech-shaped noise from three major speech synthesis platforms. It was found that synthesized voices have a similar intelligibility range to human voices, and some synthesized voices were more intelligible than human voices. It was also found that two modern automatic speech recognition systems recognized 10% more words than human listeners.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 4","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144058097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing speech intelligibility in optical microphone systems through physics-informed data augmentation.","authors":"Jia-Wei Chen, Jia-Hui Li, Yi-Hao Jiang, Yi-Chang Wu, Ying-Hui Lai","doi":"10.1121/10.0036356","DOIUrl":"10.1121/10.0036356","url":null,"abstract":"<p><p>Laser Doppler vibrometers (LDVs) facilitate noncontact speech acquisition; however, they are prone to material-dependent spectral distortions and speckle noise, which degrade intelligibility in noisy environments. This study proposes a data augmentation method that incorporates material-specific and impulse noises to simulate LDV-induced distortions. The proposed approach utilizes a gated convolutional neural network with HiFi-GAN to enhance speech intelligibility across various material and low signal-to-noise ratio (SNR) conditions, achieving a short-time objective intelligibility score of 0.76 at 0 dB SNR. These findings provide valuable insights into optimized augmentation and deep-learning techniques for enhancing LDV-based speech recordings in practical applications.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 4","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143766092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception-production link mediated by position in the imitation of Korean nasal stops.","authors":"Jiwon Hwang, Yu-An Lu","doi":"10.1121/10.0036057","DOIUrl":"10.1121/10.0036057","url":null,"abstract":"<p><p>This study explores how perceptual cues in two positions influence imitation of Korean nasal stops. As a result of initial denasalization, nasality cues are secondary in the initial position but primary in the medial position. Categorization and imitation tasks using CV (consonant-vowel) and VCV (vowel-consonant-vowel) items on a continuum from voiced oral to nasal stops were completed by 32 Korean speakers. Results revealed categorical imitation of nasality medially, whereas imitation was gradient or minimal initially. Furthermore, individuals requiring stronger nasality cues to categorize a nasal sound produced greater nasality in imitation. These findings highlight a perception-production link mediated by positional cue reliance.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial grouping as a method to improve personalized head-related transfer function prediction.","authors":"Keng-Wei Chang, Yih-Liang Shen, Tai-Shih Chi","doi":"10.1121/10.0036032","DOIUrl":"10.1121/10.0036032","url":null,"abstract":"<p><p>The head-related transfer function (HRTF) characterizes the frequency response of the sound traveling path between a specific location and the ear. When estimating HRTFs with neural network models, angle-specific models greatly outperform global models but demand high computational resources. To balance computational resources and performance, we propose a method that groups HRTF data spatially to reduce variance within each subspace. An HRTF-predicting neural network is then trained for each subspace. Results show that the proposed method outperforms both global models and angle-specific models when different grouping strategies are used on the ipsilateral and contralateral sides.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Variation in the production of nasal coarticulation by speaker age and speech style.","authors":"Georgia Zellou, Michelle Cohn","doi":"10.1121/10.0036227","DOIUrl":"10.1121/10.0036227","url":null,"abstract":"<p><p>This study investigates apparent-time variation in the production of anticipatory nasal coarticulation in California English. Productions of consonant-vowel-nasal words in clear vs casual speech by 58 speakers aged 18-58 (grouped into three generations) were analyzed for degree of coarticulatory vowel nasality. Results reveal an interaction between age and style: the two younger speaker groups produce greater coarticulation (measured as A1-P0) in clear speech, whereas older speakers produce less variable coarticulation across styles. Yet, duration lengthening in clear speech is stable across ages. Thus, age- and style-conditioned changes in produced coarticulation interact as part of change in coarticulation grammars over time.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Voice assistant technology continues to underperform on children's speech.","authors":"Holly Bradley, Madeleine E Yu, Elizabeth K Johnson","doi":"10.1121/10.0036052","DOIUrl":"10.1121/10.0036052","url":null,"abstract":"<p><p>Voice assistant (VA) technology is increasingly part of children's everyday lives. But how well do these systems understand children? No study has asked this with children under 5 years old. Here, two versions of Siri, and one of Alexa, were tested on their ability to transcribe utterances produced by 2-, 3-, and 5-year-olds. Human listeners (mothers and undergraduates) were also tested. Results showed that while Siri's performance on children's speech has improved in recent years, even the newest Siri and Alexa models struggle with children's speech. Human listeners far outperformed VA systems with all ages, especially with the youngest children's speech.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bone mineral density and hydroxyapatite alignment in leg cortical bone influence on ultrasound velocity.","authors":"Shuta Kodama, Hiroshi Mita, Norihisa Tamura, Daisuke Koyama, Mami Matsukawa","doi":"10.1121/10.0036082","DOIUrl":"10.1121/10.0036082","url":null,"abstract":"<p><p>Bone diagnosis using x-ray techniques, such as computed tomography and dual-energy x-ray absorptiometry, can evaluate bone mineral density (BMD) and microstructure but does not provide elastic properties. This study investigated the ultrasonic properties of racehorse leg cortical bone, focusing on the relationship between wave velocity, BMD, and hydroxyapatite (HAp) crystallite alignment. The results showed a strong correlation between wave velocity and BMD, suggesting that quantitative ultrasound-obtained wave velocity is primarily influenced by BMD, followed by the HAp alignment direction.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143588552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A method of reference phase velocity selecting for bearing estimation with a horizontal line array in shallow water.","authors":"Dai Liu, Feilong Zhu, Yanjun Zhang, Zhaohui Peng","doi":"10.1121/10.0035934","DOIUrl":"https://doi.org/10.1121/10.0035934","url":null,"abstract":"<p><p>In shallow water environments, choosing an appropriate reference phase velocity for direction-of-arrival estimation with a beamformed underwater horizontal line array is very important. The direction of the maximum beamformer output power will deviate from the true source bearing when a mismatched reference phase velocity is used. This Letter analyzes the intrinsic relationship between the reference phase velocity and the normal mode amplitude distribution, source bearing, and array aperture, and proposes a multi-parameter weighted reference phase velocity selection method that improves the accuracy of source bearing estimation. Numerical simulations and experimental results validate the effectiveness of this method.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of spatial asymmetry and voice-gender differences between talkers on spatial release from masking in normal-hearing listeners.","authors":"Yonghee Oh, Josephine Kinder, Phillip Friggle, Caroline Cuthbertson","doi":"10.1121/10.0036249","DOIUrl":"10.1121/10.0036249","url":null,"abstract":"<p><p>This study investigated how a listener's spatial release from masking (SRM) performance is affected by spatial asymmetry and voice-gender differences between talkers in multi-talker listening situations. The amounts of SRM were measured with symmetric and asymmetric (toward the right or left) masker configurations in same-gender and different-gender target-masker conditions. The results showed that SRM co-varied with talkers' voice-gender differences and spatial asymmetry cues: it was maximized in the same-gender, asymmetrical target-masker condition and minimized in the different-gender, symmetrical target-masker condition. These findings suggest that talkers' spatial asymmetry and voice-gender differences could contribute independently to the variation in SRM.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}