{"title":"Spatial observations of low-frequency acoustic propagation near isolated seamounts using an autonomous surface vehicle.","authors":"Matthew McKinley, Davis Rider, Laurent Grare, Ganesh Gopalakrishnan, Luc Lenain, Karim G Sabra","doi":"10.1121/10.0036447","DOIUrl":"https://doi.org/10.1121/10.0036447","url":null,"abstract":"<p>This work demonstrates the feasibility of using autonomous surface vehicles equipped with a shallow towed acoustic module (TAM) to survey the spatial variability of low-frequency acoustic propagation across complex bathymetry, such as the Atlantis II seamounts in the Northwest Atlantic. The abrupt seamount topography is found to significantly influence the TAM's recordings of chirp transmissions (500-600 Hz band) from a bottom-moored source ∼30 km from the seamounts, notably causing blockage of in-plane propagation paths and complex reverberation arrivals displaying three-dimensional effects, as confirmed by synthetic aperture beamforming. Ray tracing simulations based on a data-assimilated ocean model are compared to these observations.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 4","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143999917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards more accurate sound field verification using directional acoustic filtering.","authors":"Emily Barosin, Kaustubha Raghukumar","doi":"10.1121/10.0036394","DOIUrl":"https://doi.org/10.1121/10.0036394","url":null,"abstract":"<p>Attributing omnidirectional sound levels to a specific source in the ocean can be challenging when there are multiple competing sources of sound, such as boats or biological activity. Here, we present a method to directionally filter acoustic measurements based on vector measurements of acoustic pressure and particle velocity. The directional discrimination is applied to estimate sound energy from two marine energy sources: sound generated during the decommissioning of an oil platform and sound from an operating tidal energy converter. The application of a directional mask leads to distinctly different spectra and some differences in energy, relative to the unmasked scenarios.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 4","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143796883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian matched-field inversion for shear and compressional geoacoustic profiles at the New England Mud Patch.","authors":"Stan E Dosso, Preston S Wilson, David P Knobles, Julien Bonnel","doi":"10.1121/10.0036374","DOIUrl":"https://doi.org/10.1121/10.0036374","url":null,"abstract":"<p>This Letter estimates shear and compressional seabed geoacoustic profiles at the New England Mud Patch through trans-dimensional Bayesian inversion of matched-field acoustic data over a 20-2000 Hz bandwidth. Results indicate low shear-wave speeds (∼35 m/s) with relatively small uncertainties over most of the upper mud layer, increasing in underlying transition and sand layers. Compressional parameters, including attenuation, are also well estimated, but shear-wave attenuation is poorly determined. Comparison of inversions with/without shear parameters and consideration of inter-parameter correlations indicates that estimates of compressional parameters are not substantially influenced by shear effects, with the possible exception of compressional-wave attenuation in the sand layer.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 4","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143775232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating synthesized speech intelligibility in noise.","authors":"Ye Yang, Dathan Nguyen, Katherine Chen, Fan-Gang Zeng","doi":"10.1121/10.0036397","DOIUrl":"https://doi.org/10.1121/10.0036397","url":null,"abstract":"<p>Humans can modify their speech to improve intelligibility in noisy environments. With the advancement of speech synthesis technology, machines may also synthesize voices that remain highly intelligible in noisy conditions. This study evaluates both the subjective and objective intelligibility of synthesized speech in speech-shaped noise from three major speech synthesis platforms. It was found that synthesized voices have a similar intelligibility range to human voices, and some synthesized voices were more intelligible than human voices. It was also found that two modern automatic speech recognition systems recognized 10% more words than human listeners.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 4","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144058097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The stability of articulatory and acoustic oscillatory signals derived from speech.","authors":"Jessica Campbell, Dani Byrd, Louis Goldstein","doi":"10.1121/10.0036389","DOIUrl":"https://doi.org/10.1121/10.0036389","url":null,"abstract":"<p>Articulatory underpinnings of periodicities in the speech signal are unclear beyond a general alternation of vocal tract opening and closing. This study evaluates a modulatory articulatory signal that captures instantaneous change in vocal tract posture and its relation with two acoustic oscillatory signals, comparing their stabilities to the progression of vowel and stressed vowel onsets. Modulatory signals can be calculated more efficiently than labeling linguistic events. These signals were more stable in periodicity than acoustic vowel onsets and not different from stressed vowel onsets, suggesting that an articulatory modulation function can provide a useful method for indexing foundational periodicities in speech without tedious annotation.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 4","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12010241/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144022521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing speech intelligibility in optical microphone systems through physics-informed data augmentation.","authors":"Jia-Wei Chen, Jia-Hui Li, Yi-Hao Jiang, Yi-Chang Wu, Ying-Hui Lai","doi":"10.1121/10.0036356","DOIUrl":"10.1121/10.0036356","url":null,"abstract":"<p>Laser Doppler vibrometers (LDVs) facilitate noncontact speech acquisition; however, they are prone to material-dependent spectral distortions and speckle noise, which degrade intelligibility in noisy environments. This study proposes a data augmentation method that incorporates material-specific and impulse noises to simulate LDV-induced distortions. The proposed approach utilizes a gated convolutional neural network with HiFi-GAN to enhance speech intelligibility across various material and low signal-to-noise ratio (SNR) conditions, achieving a short-time objective intelligibility score of 0.76 at 0 dB SNR. These findings provide valuable insights into optimized augmentation and deep-learning techniques for enhancing LDV-based speech recordings in practical applications.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 4","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143766092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception-production link mediated by position in the imitation of Korean nasal stops.","authors":"Jiwon Hwang, Yu-An Lu","doi":"10.1121/10.0036057","DOIUrl":"10.1121/10.0036057","url":null,"abstract":"<p>This study explores how perceptual cues in two positions influence imitation of Korean nasal stops. As a result of initial denasalization, nasality cues are secondary in the initial position but primary in the medial position. Categorization and imitation tasks using CV (consonant-vowel) and VCV (vowel-consonant-vowel) items on a continuum from voiced oral to nasal stops were completed by 32 Korean speakers. Results revealed categorical imitation of nasality medially, whereas imitation was gradient or minimal initially. Furthermore, individuals requiring stronger nasality cues to categorize a nasal sound produced greater nasality in imitation. These findings highlight a perception-production link mediated by positional cue reliance.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial grouping as a method to improve personalized head-related transfer function prediction.","authors":"Keng-Wei Chang, Yih-Liang Shen, Tai-Shih Chi","doi":"10.1121/10.0036032","DOIUrl":"10.1121/10.0036032","url":null,"abstract":"<p>The head-related transfer function (HRTF) characterizes the frequency response of the sound traveling path between a specific location and the ear. When estimating HRTFs with neural network models, angle-specific models greatly outperform global models but demand high computational resources. To balance computational resources and performance, we propose grouping HRTF data spatially to reduce variance within each subspace. An HRTF-predicting neural network is then trained for each subspace. Results show the proposed method outperforms global models and angle-specific models when different grouping strategies are used on the ipsilateral and contralateral sides.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Variation in the production of nasal coarticulation by speaker age and speech style.","authors":"Georgia Zellou, Michelle Cohn","doi":"10.1121/10.0036227","DOIUrl":"10.1121/10.0036227","url":null,"abstract":"<p>This study investigates apparent-time variation in the production of anticipatory nasal coarticulation in California English. Productions of consonant-vowel-nasal words in clear vs casual speech by 58 speakers aged 18-58 (grouped into three generations) were analyzed for degree of coarticulatory vowel nasality. Results reveal an interaction between age and style: the two younger speaker groups produce greater coarticulation (measured as A1-P0) in clear speech, whereas older speakers produce less variable coarticulation across styles. Yet, duration lengthening in clear speech is stable across ages. Thus, age- and style-conditioned changes in produced coarticulation interact as part of change in coarticulation grammars over time.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Voice assistant technology continues to underperform on children's speech.","authors":"Holly Bradley, Madeleine E Yu, Elizabeth K Johnson","doi":"10.1121/10.0036052","DOIUrl":"10.1121/10.0036052","url":null,"abstract":"<p>Voice assistant (VA) technology is increasingly part of children's everyday lives. But how well do these systems understand children? No study has asked this with children under 5 years old. Here, two versions of Siri and one of Alexa were tested on their ability to transcribe utterances produced by 2-, 3-, and 5-year-olds. Human listeners (mothers and undergraduates) were also tested. Results showed that while Siri's performance on children's speech has improved in recent years, even the newest Siri and Alexa models struggle with children's speech. Human listeners far outperformed VA systems at all ages, especially with the youngest children's speech.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}