On propagation characteristics of ultrasonic guided waves in layered fluid-saturated porous media using spectral method.
Hongyan Zhang, Linfeng Wang, Xin Chen, Jian Li, Yiwei Liu, Haichao Liu, Yang Liu
Journal of the Acoustical Society of America, 2024-11-01. doi: 10.1121/10.0034232

Fluid-saturated porous media play an increasingly important role in emerging fields such as lithium batteries and artificial bones. Accurately solving the governing equations of guided waves is key to applying ultrasonic guided-wave nondestructive testing to fluid-saturated porous media. This paper derives the Lamb wave equation in layered fluid-saturated porous materials based on Biot theory and proposes a spectral method suited to solving such complex wave equations. The spectral method recasts the governing wave equations as a matrix eigenvalue problem using spectral differentiation matrices and introduces boundary conditions by replacing the corresponding rows of the wave-equation matrix with the stress or displacement conditions in matrix form. For complex differential equations, such as the governing equations of guided waves in porous media, the spectral method offers faster computation, fewer missed roots, and a simpler implementation. The method is used to calculate the acoustic field characteristics of layered fluid-saturated porous media under different boundary conditions and environments. Results show that the surface treatment details and environment of fluid-saturated porous materials play an important role in the propagation of guided waves.
{"title":"Sound source locations and their roles in Japanese voiceless \"glottal\" fricative production.","authors":"Tsukasa Yoshinaga, Kikuo Maekawa, Akiyoshi Iida","doi":"10.1121/10.0034229","DOIUrl":"10.1121/10.0034229","url":null,"abstract":"<p><p>Although [h] is described as a glottal fricative, it has never been demonstrated whether [h] has its source exclusively at the glottis. In this study, sound source locations and their influence on sound amplitudes were investigated by conducting mechanical experiments and airflow simulations. Vocal tract data of [h] were obtained in three phonemic contexts from two native Japanese subjects using three-dimensional static magnetic resonance imaging (MRI). Acrylic vocal tract replicas were constructed, and the sound was reproduced by supplying airflow to the vocal tracts with adducted or abducted vocal folds. The sound source locations were estimated by solving the Navier-Stokes equations. The results showed that the amplitudes of sounds produced by the vocal tracts with an open glottis were in a similar range (±3 dB) to those with a glottal gap of 3 mm in some contexts. The sound sources in these cases were observed in the pharyngeal cavity or near the soft palate. Similar degrees of oral constrictions were observed in the real-time MRI, indicating that the sound traditionally described as [h] is produced, at least in some contexts, with sound sources of turbulent flow generated by a supralaryngeal constriction of the following vowel.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142558112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Cross-country variation in psychophysiological responses to traffic noise exposure: Laboratory experiments in India and the UK.
Manish Manohare, Francesco Aletta, Tin Oberman, Rajasekar Elangovan, Manoranjan Parida, Jian Kang
Journal of the Acoustical Society of America, 2024-11-01. doi: 10.1121/10.0034242

Traffic noise exposure has detrimental effects on human health, including both auditory and nonauditory impacts. One such nonauditory factor is noise sensitivity: individuals and communities in different countries may exhibit different patterns of noise sensitivity and corresponding tolerance levels, leading to differences in overall noise perception. This paper investigated cross-country differences in psychophysiological responses to traffic noise exposure between Indian and British individuals. A listening experiment based on psychophysiological signals [heart rate variability (HRV) and skin conductance response (SCR)] was conducted in India and the United Kingdom to analyze changes in noise perception and psychophysiological responses resulting from exposure to the same noise stimuli. HRV analysis indicated a greater cardiovascular impact in the British group, reflected in a significant increase in heart rate (W = 653, p < 0.01). A significant increase in SCR (W = 535, p < 0.001) was also noted, indicating greater physiological stress among British participants exposed to the traffic noise stimuli. These findings highlight cross-country differences in noise perception as captured by psychophysiological responses. Understanding these differences can inform targeted interventions and policies to mitigate the adverse effects of traffic noise on human well-being.

Release from same-talker speech-in-speech masking: Effects of masker intelligibility and other contributing factors.
Mingyue Huo, Yinglun Sun, Daniel Fogerty, Yan Tang
Journal of the Acoustical Society of America, 2024-11-01. doi: 10.1121/10.0034235

Human speech perception declines in the presence of masking speech, particularly when the masker is intelligible and acoustically similar to the target. A prior investigation demonstrated a substantial reduction in masking when the intelligibility of competing speech was reduced by corrupting voiced segments with noise [Huo, Sun, Fogerty, and Tang (2023), "Quantifying informational masking due to masker intelligibility in same-talker speech-in-speech perception," in Interspeech 2023, pp. 1783-1787]. As this processing also reduced the prominence of voiced segments, it was unclear whether the unmasking was due to reduced linguistic content, acoustic similarity, or both. The current study compared the masking of original competing speech (high intelligibility) to competing speech with time reversal of voiced segments (VS-reversed, low intelligibility) at various target-to-masker ratios. Modeling results demonstrated similar energetic masking between the two maskers. However, intelligibility of the target speech was considerably better with the VS-reversed masker compared to the original masker, likely due to the reduced linguistic content. Further corrupting the masker's voiced segments resulted in additional release from masking. Acoustic analyses showed that the portion of target voiced segments overlapping with masker voiced segments, and the similarity between target and masker overlapped voiced segments, impacted listeners' speech recognition. Evidence also suggested that modulation masking in the spectro-temporal domain interferes with listeners' ability to glimpse the target.

Gender and speech material effects on the long-term average speech spectrum, including at extended high frequencies.
Vahid Delaram, Margaret K Miller, Rohit M Ananthanarayana, Allison Trine, Emily Buss, G Christopher Stecker, Brian B Monson
Journal of the Acoustical Society of America, 2024-11-01. doi: 10.1121/10.0034231. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540443/pdf/

Gender and language effects on the long-term average speech spectrum (LTASS) have been reported, but typically using recordings that were bandlimited and/or failed to accurately capture extended high frequencies (EHFs). Accurate characterization of the full-band LTASS is warranted given recent data on the contribution of EHFs to speech perception. The present study characterized the LTASS for high-fidelity, anechoic recordings of males and females producing Bamford-Kowal-Bench sentences, digits, and unscripted narratives. Gender had an effect on spectral levels at both ends of the spectrum: males had higher levels than females below approximately 160 Hz, owing to lower fundamental frequencies; females had ∼4 dB higher levels at EHFs, but this effect was dependent on speech material. Gender differences were also observed at ∼300 Hz, and between 800 and 1000 Hz, as previously reported. Despite differences in phonetic content, there were only small, gender-dependent differences in EHF levels across speech materials. EHF levels were highly correlated across materials, indicating relative consistency within talkers. Our findings suggest that LTASS levels at EHFs are influenced primarily by talker and gender, highlighting the need for future research to assess whether EHF cues are more audible for female speech than for male speech.
{"title":"Binaural fusion: Complexities in definition and measurement.","authors":"Lina A J Reiss, Matthew J Goupell","doi":"10.1121/10.0030476","DOIUrl":"10.1121/10.0030476","url":null,"abstract":"<p><p>Despite the growing interest in studying binaural fusion, there is little consensus over its definition or how it is best measured. This review seeks to describe the complexities of binaural fusion, highlight measurement challenges, provide guidelines for rigorous perceptual measurements, and provide a working definition that encompasses this information. First, it is argued that binaural fusion may be multidimensional and might occur in one domain but not others, such as fusion in the spatial but not the spectral domain or vice versa. Second, binaural fusion may occur on a continuous scale rather than on a binary one. Third, binaural fusion responses are highly idiosyncratic, which could be a result of methodology, such as the specific experimental instructions, suggesting a need to explicitly report the instructions given. Fourth, it is possible that direct (\"Did you hear one sound or two?\") and indirect (\"Where did the sound come from?\" or \"What was the pitch of the sound?\") measurements of fusion will produce different results. In conclusion, explicit consideration of these attributes and reporting of methodology are needed for rigorous interpretation and comparison across studies and listener populations.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11470809/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142400584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Deep-water ambient sound over the Atlantis II seamounts in the Northwest Atlantic.
Matthew W Walters, Oleg A Godin, John E Joseph, Tsu Wei Tan
Journal of the Acoustical Society of America, 2024-10-01. doi: 10.1121/10.0032360

Ambient sound was continuously recorded for 52 days by three synchronized, single-hydrophone, near-bottom receivers. The receivers were moored at depths of 2573, 2994, and 4443 m on flanks and in a trough between the edifices of the Atlantis II seamounts. The data reveal the power spectra and intermittency of the ambient sound intensity in a 13-octave frequency band from 0.5 to 4000 Hz. The statistical distribution of sound intensity exhibits much heavier tails than the expected exponential intensity distribution throughout the frequency band of observations. It is established with high statistical significance that the data are incompatible with the common assumption of normally distributed ambient noise in deep water. Spatial variability of the observed ambient sound appears to be controlled by the seafloor properties, bathymetric shadowing, and the nonuniform distribution of noise sources on the sea surface. Temporal variability of ambient sound is dominated by changes in the wind speed and the position of the Gulf Stream relative to the experiment site. Ambient sound intensity increases by 4-10 dB when the Gulf Stream axis is within 25 km of the receivers. The sound intensification is attributed to the effect of the Gulf Stream current on surface wave breaking.

Advancing glider-based acoustic measurements of underwater-radiated ship noise.
Khaled Mohsen Helal, Nicolai von Oppeln-Bronikowski, Lorenzo Moro
Journal of the Acoustical Society of America, 2024-10-01. doi: 10.1121/10.0032357

Ocean gliders are versatile and efficient passive acoustic monitoring platforms in remote marine environments, but few studies have examined their potential to monitor ship underwater noise. This study investigates a Slocum glider's capability to assess ship noise compared with that of fixed observers. Trials were conducted in shallow coastal inlets and deep bays in Newfoundland, Canada, using a glider, a hydrophone array, and a single-moored system. The study focused on (1) the glider's self-noise signature, (2) range-depth-dependent propagation loss (PL) models, and (3) localizing the vessel relative to the glider using the glider's acoustic measurements. The primary contributors to the glider's self-noise were the buoyancy pump and rudder. The pitch-motor noise coincided with the buoyancy pump activation and did not contribute to the glider self-noise in our experiments. PL models showed that seafloor bathymetry and sound speed profiles significantly affected estimates compared with models assuming flat, range-independent profiles. The glider's performance in recording ship noise was superior to that of the other platforms. Using its hydrophones, the glider could estimate the bearing to the vessel, although a third hydrophone would improve reliability and provide range. The findings demonstrate that gliders can characterize noise and enhance our understanding of ocean sound sources.
{"title":"Theoretical modeling and parameter identification of balanced armature loudspeakers.","authors":"Wei Liu, Jie Huang, Jiazheng Cheng, Yong Shen","doi":"10.1121/10.0030465","DOIUrl":"https://doi.org/10.1121/10.0030465","url":null,"abstract":"<p><p>Theoretical modeling and parameter identification are essential for optimizing loudspeaker performance and enabling active control. Although relevant theories for moving-coil loudspeakers are well-developed, accurate theoretical modeling and parameter identification methods for balanced armature loudspeakers (BALs) are scant. This study proposes a model using the equivalent circuit method (ECM) for BALs, with consideration of the armature-suspension coupling as well as the non-piston vibration of the diaphragm. Based on the proposed ECM model, a time-domain identification algorithm utilizing measured voltage, current, and displacement data is established to identify the necessary parameters. Employing the theoretical model and proposed identification method, the model parameters of two different BALs are measured. Comparisons between experimental and numerical results demonstrate the accuracy and effectiveness of the proposed model and identification method in predicting impedance, displacement, and sound pressure responses.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142468606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating street-view images to quantify the urban soundscape: Case study of Fuzhou City's main urban areaa).","authors":"Quanquan Rui, Kunpeng Gu, Huishan Cheng","doi":"10.1121/10.0029026","DOIUrl":"https://doi.org/10.1121/10.0029026","url":null,"abstract":"<p><p>Soundscapes are an important part of urban landscapes and play a key role in the health and well-being of citizens. However, predicting soundscapes over a large area with fine resolution remains a great challenge and traditional methods are time-consuming and require laborious large-scale noise detection work. Therefore, this study utilized machine learning algorithms and street-view images to estimate a large-area urban soundscape. First, a computer vision method was applied to extract landscape visual feature indicators from large-area streetscape images. Second, the 15 collected soundscape indicators were correlated with landscape visual indicators to construct a prediction model, which was applied to estimate large-area urban soundscapes. Empirical evidence from 98 000 street-view images in Fuzhou City indicated that street-view images can be used to predict street soundscapes, validating the effectiveness of machine learning algorithms in soundscape prediction.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142349031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}