{"title":"Evolutionary mechanism of Y-branches in acoustic Lichtenberg figures just below the water surface.","authors":"Zhaokang Lei, Xinran Dong, Xinyi Zuo, Chenghui Wang, Yaorong Wu, Shuyu Lin, Jianzhong Guo","doi":"10.1121/10.0034365","DOIUrl":"https://doi.org/10.1121/10.0034365","url":null,"abstract":"<p><p>The acoustic Lichtenberg figure (ALF) in an ultrasonic cleaner with a frequency of 28 kHz at different power levels was observed using high-speed photography. The nonlinear response of the cavitation structure was analyzed by the entropy spectrum in the ALF images, which showed the modulation influence of the primary acoustic field, exhibiting the fluctuations of the bubble distribution with time. Typical Y-branches predict the paths by which surrounding bubbles are attracted and converge into the structure, the branches are curved due to bubble-bubble interactions, and the curvature increases as the bubbles are approaching the main chain. The average travelling speed of bubbles along the branches is about 1.1 m/s, almost independent of power level of the ultrasonic cleaner. A theoretical model consisting of free bubbles and a straight bubble chain of finite length was developed to explore the evolutionary mechanism of branching. It was found that the bubble trajectories showed a bending tendency similar to the experimentally observed Y-branches, and the stationary straight bubble chain parallel to the main chain could evolve into a curved chain and eventually become a branch of the main chain. The theoretical predictions agree well with the experimental results, verifying the evolutionary mechanism of Y-branches in ALF.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"156 5","pages":"3373-3383"},"PeriodicalIF":2.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142648180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Challenges in classifying cavitation: Correlating high-speed optical imaging and passive acoustic mapping of cavitation dynamics.","authors":"Qiang Wu, Michael Gray, Cameron A B Smith, Luca Bau, Robin O Cleveland, Constantin Coussios, Eleanor Stride","doi":"10.1121/10.0034426","DOIUrl":"https://doi.org/10.1121/10.0034426","url":null,"abstract":"<p><p>Both the biological effects and acoustic emissions generated by cavitation are functions of bubble dynamics. Monitoring of acoustic emissions is therefore desirable to improve treatment safety and efficacy. The relationship between the emission spectra and bubble dynamics is, however, complex. The aim of this study was to characterise this relationship for single microbubbles using simultaneous ultra-high-speed optical imaging and passive acoustic mapping of cavitation emissions. As expected, both the number of discrete harmonics and broadband content in the emissions increased with increasing amplitude of bubble oscillation, but the spectral content was also dependent upon other variables, including the frequency of bubble collapse and receiving transducer characteristics. Moreover, phenomena, such as fragmentation and microjetting, could not be distinguished from spherical oscillations when using the full duration acoustic waveform to calculate the emission spectra. There was also no correlation between the detection of broadband noise and widely used thresholds for distinguishing bubble dynamics. It is therefore concluded that binary categorisations, such as stable and inertial cavitation, should be avoided, and different types of bubble behavior should not be inferred on the basis of frequency content alone. Treatment monitoring criteria should instead be defined according to the relevant bioeffect(s) for a particular application.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"156 5","pages":"3608-3620"},"PeriodicalIF":2.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142716388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Erratum: Linear sweeps and the characterisation of linearly time-varying acoustical systems [J. Acoust. Soc. Am. 155, 2794-2802 (2024)].","authors":"Hammad Hussain, Guillaume Dutilleux","doi":"10.1121/10.0034355","DOIUrl":"https://doi.org/10.1121/10.0034355","url":null,"abstract":"","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"156 5","pages":"3140-3142"},"PeriodicalIF":2.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142622984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sequential auditory grouping reduces binaural pitch fusion in listeners with normal hearing, hearing aids, and cochlear implantsa).","authors":"Yonghee Oh, Nicole Dean, Frederick J Gallun, Lina A J Reiss","doi":"10.1121/10.0034366","DOIUrl":"10.1121/10.0034366","url":null,"abstract":"<p><p>Binaural pitch fusion, the perceptual integration of dichotically presented stimuli that evoke different pitches, can be considered a type of simultaneous grouping. Hence, auditory streaming cues such as temporally flanking stimuli that promote sequential grouping might compete with simultaneous dichotic grouping to reduce binaural fusion. Here, we measured binaural pitch fusion using an auditory streaming task in normal-hearing listeners and hearing-impaired listeners with hearing aids and/or cochlear implants. Fusion ranges, the frequency or electrode ranges over which binaural pitch fusion occurs, were measured in a streaming paradigm using 10 alterations of a dichotic reference/comparison stimulus with a diotic capture stimulus, with fusion indicated by perception of a single stream. Stimuli were pure tones or electric pulse trains depending on the hearing device, with frequency or electrode varied across trials for comparison stimuli. Fusion ranges were also measured for the corresponding isolated stimulus conditions with the same stimulus durations. For all groups, fusion ranges decreased by up to three times in the streaming paradigm compared to the corresponding isolated stimulus paradigm. Hearing-impaired listeners showed greater reductions in fusion than normal-hearing listeners. The findings add further evidence that binaural pitch fusion is moderated by central processes involved in auditory grouping or segregation.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"156 5","pages":"3217-3231"},"PeriodicalIF":2.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11563690/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142623031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integer multi-wavelength gradient phase metagrating for perfect refraction: Phase choice freedom in supercella).","authors":"Jiaqi Quan, Lin Xu, Yangyang Fu, Lei Gao, Huanyang Chen, Yadong Xu","doi":"10.1121/10.0034239","DOIUrl":"https://doi.org/10.1121/10.0034239","url":null,"abstract":"<p><p>Phase gradient metagratings (PGMs) reshape the impinging wavefront though the interplay between the linear adjacent phase increment inside supercells and the grating diffraction of supercells. However, the adjacent phase increment is elaborately designed by tuning the resonance of each subcell at a certain target frequency, which inevitably confines PGMs to operate only at the single frequency in turn. We notice that there exists a freedom of phase choice with a multi-2π increment in a supercell of PGMs, whereas conventional designs focus on the 2π increment. This freedom can induce a collaborative mechanism of surface impedance matching and multi-wavelength subcells, enabling the design of PGMs at multi-wavelengths. We further design and fabricate a supercell consisting of eight curved pipes to construct the two-wavelengths PGMs. The linear adjacent phase gradient of 0.25π at the fundamental frequency 3430 Hz is achieved, while the almost perfect transmission effect is observed due to the impedance match at the ends of curved pipes. In addition, the transmission field at the double frequency 6860 Hz is measured, whose refraction direction is consistent with that at 3430 Hz. This design strategy originated from phase choice freedom in the supercell and the experimental fabrication might stimulate applications on other multi-wavelength metasurfaces/metagratings.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"156 5","pages":"2982-2988"},"PeriodicalIF":2.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142558106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonlinear acoustic modulation utilizing designed acoustic bubble array.","authors":"Zhaoyu Deng, Zhichao Ma, Xiaozhou Liu","doi":"10.1121/10.0034241","DOIUrl":"https://doi.org/10.1121/10.0034241","url":null,"abstract":"<p><p>Acoustic modulation has attracted significant investigative interest for their outstanding promising application scenes. Furthermore, acoustic bubble array has shown anticipated foreground in signal processing and acoustic manipulation. Here, we demonstrate a nonlinear acoustic modulation method via designed acoustic bubble array. Numerical calculations have been conducted to analyze several influential parameters and the corresponding effects on the vibrational behaviors of the acoustic bubbles. Appropriate corrections have been added on the numerical model to elucidate the physical scene. Experimental validation has confirmed the practicability and validity of the designation. Potential applications in biological tissue imaging and unidirectional sound transmission can be expected with further research.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"156 5","pages":"3080-3087"},"PeriodicalIF":2.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142583164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intrasubject variability in potential early markers of sensorineural hearing damage.","authors":"Nele De Poortere, Sarineh Keshishzadeh, Hannah Keppler, Ingeborg Dhooge, Sarah Verhulst","doi":"10.1121/10.0034423","DOIUrl":"https://doi.org/10.1121/10.0034423","url":null,"abstract":"<p><p>The quest for noninvasive early markers for sensorineural hearing loss (SNHL) has yielded diverse measures of interest. However, comprehensive studies evaluating the test-retest reliability of multiple measures and stimuli within a single study are scarce, and a standardized clinical protocol for robust early markers of SNHL remains elusive. To address these gaps, this study explores the intra-subject variability of various potential electroencephalogram- (EEG-) biomarkers for cochlear synaptopathy (CS) and other SNHL-markers in the same individuals. Fifteen normal-hearing young adults underwent repeated measures of (extended high-frequency) pure-tone audiometry, speech-in-noise intelligibility, distortion-product otoacoustic emissions (DPOAEs), and auditory evoked potentials; comprising envelope following responses (EFR) and auditory brainstem responses (ABR). Results confirm high reliability in pure-tone audiometry, whereas the matrix sentence-test exhibited a significant learning effect. The reliability of DPOAEs varied across three evaluation methods, each employing distinct SNR-based criteria for DPOAE-datapoints. EFRs exhibited superior test-retest reliability compared to ABR-amplitudes. Our findings emphasize the need for careful interpretation of presumed noninvasive SNHL measures. While tonal-audiometry's robustness was corroborated, we observed a confounding learning effect in longitudinal speech audiometry. The variability in DPOAEs highlights the importance of consistent ear probe replacement and meticulous measurement techniques, indicating that DPOAE test-retest reliability is significantly compromised under less-than-ideal conditions. As potential EEG-biomarkers of CS, EFRs are preferred over ABR-amplitudes based on the current study results.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"156 5","pages":"3480-3495"},"PeriodicalIF":2.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142676044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frequency importance for sentence recognition in co-located noise, co-located speech, and spatially separated speech.","authors":"Adam K Bosen, Peter A Wasiuk, Lauren Calandruccio, Emily Buss","doi":"10.1121/10.0034412","DOIUrl":"10.1121/10.0034412","url":null,"abstract":"<p><p>Frequency importance functions quantify the contribution of spectral frequencies to perception. Frequency importance has been well-characterized for speech recognition in quiet and steady-state noise. However, it is currently unknown whether frequency importance estimates generalize to more complex conditions such as listening in a multi-talker masker or when targets and maskers are spatially separated. Here, frequency importance was estimated by quantifying associations between local target-to-masker ratios at the output of an auditory filterbank and keyword recognition accuracy for sentences. Unlike traditional methods used to measure frequency importance, this technique estimates frequency importance without modifying the acoustic properties of the target or masker. Frequency importance was compared across sentences in noise and a two-talker masker, as well as sentences in a two-talker masker that was either co-located with or spatially separated from the target. Results indicate that frequency importance depends on masker type and spatial configuration. Frequencies above 5 kHz had lower importance and frequencies between 600 and 1900 Hz had higher importance in the presence of a two-talker masker relative to a noise masker. Spatial separation increased the importance of frequencies between 600 Hz and 5 kHz. Thus, frequency importance functions vary across listening conditions.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"156 5","pages":"3275-3284"},"PeriodicalIF":2.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142638726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acoustic imaging of geometrically shielded sound sources using tailored Green's functions.","authors":"Lican Wang, Zhenjun Peng, Bao Chen, Zhida Ma, Wangqiao Chen, Peng Zhou, Guocheng Zhou, Siyang Zhong","doi":"10.1121/10.0034353","DOIUrl":"https://doi.org/10.1121/10.0034353","url":null,"abstract":"<p><p>In light of the growing market of urban air mobility, it is crucial to accurately detect the stationary or moving noise sources within the complex scattering environments caused by aircraft structures such as airframes and engines. This study combines conventional and wavelet-based beamforming techniques with an acoustic scattering prediction method to develop an acoustic imaging approach that considers scattering effects. Tailored Green's function is numerically evaluated and used to compute the steering vectors and the specific delayed time used in those beamforming methods. By examining common scenarios where a scatterer is positioned between the source plane and the array plane, it is observed that beamforming in a scattering environment differs from that in free space, leading to improved resolution alongside scattering-induced side lobes. The effectiveness of the developed method is validated through numerical simulations and experimental studies, confirming its improved ability to localize both stationary and rotating sound sources in a shielded environment. This advancement offers effective techniques for acoustic measurement and fault monitoring in the presence of structural scatterers.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"156 5","pages":"3102-3111"},"PeriodicalIF":2.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142622978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Testing the role of temporal coherence on speech intelligibility with noise and single-talker maskers.","authors":"Jaeeun Lee, Andrew J Oxenham","doi":"10.1121/10.0034420","DOIUrl":"10.1121/10.0034420","url":null,"abstract":"<p><p>Temporal coherence, where sounds with aligned timing patterns are perceived as a single source, is considered an essential cue in auditory scene analysis. However, its effects have been studied primarily with simple repeating tones, rather than speech. This study investigated the role of temporal coherence in speech by introducing across-frequency asynchronies. The effect of asynchrony on the intelligibility of target sentences was tested in the presence of background speech-shaped noise or a single-talker interferer. Our hypothesis was that disrupting temporal coherence should not only reduce intelligibility but also impair listeners' ability to segregate the target speech from an interfering talker, leading to greater degradation for speech-in-speech than speech-in-noise tasks. Stimuli were filtered into eight frequency bands, which were then desynchronized with delays of 0-120 ms. As expected, intelligibility declined as asynchrony increased. However, the decline was similar for both noise and single-talker maskers. Primarily target, rather than masker, asynchrony affected performance for both natural (forward) and reversed-speech maskers, and for target sentences with low and high semantic context. The results suggest that temporal coherence may not be as critical a cue for speech segregation as it is for the non-speech stimuli traditionally used in studies of auditory scene analysis.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"156 5","pages":"3285-3297"},"PeriodicalIF":2.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11575144/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142638842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}