Leveraging sound speed dynamics and generative deep learning for ray-based ocean acoustic tomography.
Priyabrata Saha, Richard X Touret, Etienne Ollivier, Jihui Jin, Matthew McKinley, Justin Romberg, Karim G Sabra
JASA Express Letters 5(4), April 2025. DOI: 10.1121/10.0036312

A generative deep learning framework is introduced for ray-based ocean acoustic tomography (OAT), an inverse problem for estimating sound speed profiles (SSPs) from arrival-time measurements between multiple acoustic transducers, which is typically ill-posed. The framework relies on a robust low-dimensional parametrization of the expected SSP variations using a variational autoencoder, with a linear dynamical model as further regularization. It was tested on SSP variations simulated by a regional ocean model with submesoscale-permitting horizontal resolution and various transducer configurations spanning the upper ocean over short propagation ranges, and was found to outperform conventional linear least-squares formulations of OAT.

Estimating sound pressure levels from distributed acoustic sensing data using 20 Hz fin whale calls.
Léa Bouffaut, Quentin Goestchel, Robin André Rørstadbotnen, Anthony Sladen, Arthur Hartog, Holger Klinck
JASA Express Letters 5(4), April 2025. DOI: 10.1121/10.0036351

Distributed acoustic sensing (DAS) is a promising technology for underwater acoustics, but its instrumental response is still being investigated to enable quantitative measurements. We use fin whale 20 Hz calls to estimate the conversion between DAS-recorded strain and acoustic pressure. Our method is tested across three deployments on varied seafloor telecommunication cables and ocean basins. Results show that, after accounting for well-established DAS response factors, a unique value for water compressibility provides a good estimate for the conversion. This work represents a significant step forward in characterizing DAS for marine monitoring and highlights potential limitations related to instrument noise floor.

{"title":"Towards more accurate sound field verification using directional acoustic filtering.","authors":"Emily Barosin, Kaustubha Raghukumar","doi":"10.1121/10.0036394","DOIUrl":"https://doi.org/10.1121/10.0036394","url":null,"abstract":"<p><p>Attributing omnidirectional sound levels to a specific source in the ocean can be challenging when there are multiple competing sources of sound such as boats, or biological activity. Here, we present a method to directionally filter acoustic measurements based on vector measurements of acoustic pressure and particle velocity. The directional discrimination is applied to estimate sound energy from two marine energy sources: sound generated during the decommissioning of an oil platform and those from an operating tidal energy converter. The application of a directional mask leads to distinctly different spectra and some differences in energy, relative to the unmasked scenarios.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 4","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143796883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian matched-field inversion for shear and compressional geoacoustic profiles at the New England Mud Patch.
Stan E Dosso, Preston S Wilson, David P Knobles, Julien Bonnel
JASA Express Letters 5(4), April 2025. DOI: 10.1121/10.0036374

This Letter estimates shear and compressional seabed geoacoustic profiles at the New England Mud Patch through trans-dimensional Bayesian inversion of matched-field acoustic data over a 20-2000 Hz bandwidth. Results indicate low shear-wave speeds (∼35 m/s) with relatively small uncertainties over most of the upper mud layer, increasing in the underlying transition and sand layers. Compressional parameters, including attenuation, are also well estimated, but shear-wave attenuation is poorly determined. Comparison of inversions with and without shear parameters, and consideration of inter-parameter correlations, indicate that estimates of compressional parameters are not substantially influenced by shear effects, with the possible exception of compressional-wave attenuation in the sand layer.

Enhancing speech intelligibility in optical microphone systems through physics-informed data augmentation.
Jia-Wei Chen, Jia-Hui Li, Yi-Hao Jiang, Yi-Chang Wu, Ying-Hui Lai
JASA Express Letters 5(4), April 2025. DOI: 10.1121/10.0036356

Laser Doppler vibrometers (LDVs) facilitate noncontact speech acquisition; however, they are prone to material-dependent spectral distortions and speckle noise, which degrade intelligibility in noisy environments. This study proposes a data augmentation method that incorporates material-specific and impulse noises to simulate LDV-induced distortions. The proposed approach utilizes a gated convolutional neural network with HiFi-GAN to enhance speech intelligibility across various material and low signal-to-noise ratio (SNR) conditions, achieving a short-time objective intelligibility score of 0.76 at 0 dB SNR. These findings provide valuable insights into optimized augmentation and deep-learning techniques for enhancing LDV-based speech recordings in practical applications.

{"title":"Spatial grouping as a method to improve personalized head-related transfer function prediction.","authors":"Keng-Wei Chang, Yih-Liang Shen, Tai-Shih Chi","doi":"10.1121/10.0036032","DOIUrl":"10.1121/10.0036032","url":null,"abstract":"<p><p>The head-related transfer function (HRTF) characterizes the frequency response of the sound traveling path between a specific location and the ear. When it comes to estimating HRTFs by neural network models, angle-specific models greatly outperform global models but demand high computational resources. To balance the computational resource and performance, we propose a method by grouping HRTF data spatially to reduce variance within each subspace. HRTF predicting neural network is then trained for each subspace. Results show the proposed method performs better than global models and angle-specific models by using different grouping strategies at the ipsilateral and contralateral sides.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception-production link mediated by position in the imitation of Korean nasal stops.","authors":"Jiwon Hwang, Yu-An Lu","doi":"10.1121/10.0036057","DOIUrl":"10.1121/10.0036057","url":null,"abstract":"<p><p>This study explores how perceptual cues in two positions influence imitation of Korean nasal stops. As a result of initial denasalization, nasality cues are secondary in the initial position but primary in the medial position. Categorization and imitation tasks using CV (consonant-vowel) and VCV (vowel-consonant-vowel) items on a continuum from voiced oral to nasal stops were completed by 32 Korean speakers. Results revealed categorical imitation of nasality medially, whereas imitation was gradient or minimal initially. Furthermore, individuals requiring stronger nasality cues to categorize a nasal sound produced greater nasality in imitation. These findings highlight a perception-production link mediated by positional cue reliance.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Variation in the production of nasal coarticulation by speaker age and speech style.","authors":"Georgia Zellou, Michelle Cohn","doi":"10.1121/10.0036227","DOIUrl":"10.1121/10.0036227","url":null,"abstract":"<p><p>This study investigates apparent-time variation in the production of anticipatory nasal coarticulation in California English. Productions of consonant-vowel-nasal words in clear vs casual speech by 58 speakers aged 18-58 (grouped into three generations) were analyzed for degree of coarticulatory vowel nasality. Results reveal an interaction between age and style: the two younger speaker groups produce greater coarticulation (measured as A1-P0) in clear speech, whereas older speakers produce less variable coarticulation across styles. Yet, duration lengthening in clear speech is stable across ages. Thus, age- and style-conditioned changes in produced coarticulation interact as part of change in coarticulation grammars over time.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bone mineral density and hydroxyapatite alignment in leg cortical bone influence on ultrasound velocity.","authors":"Shuta Kodama, Hiroshi Mita, Norihisa Tamura, Daisuke Koyama, Mami Matsukawa","doi":"10.1121/10.0036082","DOIUrl":"10.1121/10.0036082","url":null,"abstract":"<p><p>Bone diagnosis using x-ray techniques, such as computed tomography and dual-energy x-ray absorptiometry, can evaluate bone mineral density (BMD) and microstructure but does not provide elastic properties. This study investigated the ultrasonic properties of racehorse leg cortical bone, focusing on the relationship between wave velocity, BMD, and hydroxyapatite (HAp) crystallite alignment. The results showed a strong correlation between wave velocity and BMD, suggesting that quantitative ultrasound-obtained wave velocity is primarily influenced by BMD, followed by the HAp alignment direction.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"5 3","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143588552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Voice assistant technology continues to underperform on children's speech.
Holly Bradley, Madeleine E Yu, Elizabeth K Johnson
JASA Express Letters 5(3), March 2025. DOI: 10.1121/10.0036052

Voice assistant (VA) technology is increasingly part of children's everyday lives. But how well do these systems understand children? No study has asked this question with children under 5 years old. Here, two versions of Siri and one of Alexa were tested on their ability to transcribe utterances produced by 2-, 3-, and 5-year-olds. Human listeners (mothers and undergraduates) were also tested. Results showed that while Siri's performance on children's speech has improved in recent years, even the newest Siri and Alexa models struggle with children's speech. Human listeners far outperformed VA systems at all ages, especially with the youngest children's speech.
