Nadège Aoki, Benjamin Weiss, Youenn Jézéquel, Amy Apprill, and T Aran Mooney. "Replayed reef sounds induce settlement of Favia fragum coral larvae in aquaria and field environments." JASA Express Letters 4(10), October 2024. https://doi.org/10.1121/10.0032407
Abstract: Acoustic cues of healthy reefs are known to support critical settlement behaviors for one reef-building coral, but acoustic responses have not been demonstrated in additional species. Settlement of Favia fragum larvae in response to replayed coral reef soundscapes was observed by exposing larvae, in aquaria and reef settings, to playback sound treatments for 24-72 h. Settlement increased under 24 h sound treatments in both experiments. The results add to growing evidence that acoustically mediated settlement may be widespread among stony corals, with species-specific attributes, suggesting sound could be one tool employed to rehabilitate and build resilience within imperiled reef communities.
Jichen Chu, Lei Cheng, and Wen Xu. "Beam-space spatial spectrum reconstruction under unknown stationary near-field interference: Algorithm design and experimental verification." JASA Express Letters 4(10), October 2024. https://doi.org/10.1121/10.0030334
Abstract: In acoustic array signal processing, spatial spectrum estimation and the corresponding direction-of-arrival estimation are sometimes affected by stationary near-field interference, posing a considerable challenge for target detection. To address this challenge, this paper proposes a beam-space spatial spectrum reconstruction algorithm. The proposed algorithm overcomes a limitation of common spatial spectrum estimation algorithms designed for near-field interference scenarios, which require knowledge of the near-field interference array manifold. The robustness and efficacy of the proposed algorithm under strong stationary near-field interference are confirmed through analysis of simulated and real-life experimental data.
Shuji Mori, Yuto Murata, Takashi Morimoto, Yasuhide Okamoto, and Sho Kanzaki. "Tests of human auditory temporal resolution: Simulations of Bayesian threshold estimation for auditory gap detection." JASA Express Letters 4(9), September 2024. https://doi.org/10.1121/10.0028501
Abstract: In an attempt to develop tests of auditory temporal resolution using gap detection, we conducted computer simulations of Zippy Estimation by Sequential Testing (ZEST), an adaptive Bayesian threshold estimation procedure, for measuring gap detection thresholds. The results showed that the efficiency and precision of ZEST changed with the mean and standard deviation (SD) of the initial probability density function implemented in ZEST. Appropriate combinations of mean and SD values led to efficient ZEST performance; i.e., the threshold estimates converged to their true values after 10 to 15 trials.
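The ZEST procedure the abstract above simulates can be sketched in a few lines: maintain a probability density over candidate thresholds, test at the current posterior mean, and update by Bayes' rule after each simulated response. This is a minimal illustration of the general ZEST idea, not the authors' implementation; the psychometric-function shape, slope, guess/lapse rates, and grid are all assumed for the example.

```python
import numpy as np

def psychometric(x, threshold, slope=6.0, guess=0.02, lapse=0.02):
    """P(gap detected) for a stimulus at level x given a candidate threshold.
    Logistic shape with guess and lapse rates; all parameters illustrative."""
    core = 1.0 / (1.0 + np.exp(-slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * core

def zest(true_threshold, prior_mean, prior_sd, n_trials=30, rng=None):
    """Simulate one ZEST track; return the per-trial threshold estimates."""
    rng = np.random.default_rng(rng)
    grid = np.linspace(-2.0, 2.0, 401)          # candidate thresholds (log units)
    pdf = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2)
    pdf /= pdf.sum()                            # initial probability density
    estimates = []
    for _ in range(n_trials):
        x = float(np.sum(grid * pdf))           # test level = current posterior mean
        heard = rng.random() < psychometric(x, true_threshold)
        like = psychometric(x, grid)            # P("yes") under each candidate
        pdf *= like if heard else (1.0 - like)  # Bayes update for the response
        pdf /= pdf.sum()
        estimates.append(float(np.sum(grid * pdf)))
    return estimates
```

With a well-chosen prior mean and SD, tracks like this settle near the true threshold within a few tens of trials, which is the efficiency question the simulations above address.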
Christophe Lesimple, Volker Kuehnel, and Kai Siedenburg. "Hearing aid evaluation for music: Accounting for acoustical variability of music stimuli." JASA Express Letters 4(9), September 2024. https://doi.org/10.1121/10.0028397
Abstract: Music is an important signal class for hearing aids, and musical genre is often used as a descriptor for stimulus selection. However, little research has systematically investigated the acoustical properties of musical genres with respect to hearing aid amplification. Here, extracts from a combination of two comprehensive music databases were acoustically analyzed. Considerable overlap in acoustic descriptor space between genres emerged. By simulating hearing aid processing, it was shown that effects of amplification regarding dynamic range compression and spectral weighting differed across musical genres, underlining the critical role of systematic stimulus selection for research on music and hearing aids.
{"title":"Comparing human and machine speech recognition in noise with QuickSIN.","authors":"Malcolm Slaney, Matthew B Fitzgerald","doi":"10.1121/10.0028612","DOIUrl":"10.1121/10.0028612","url":null,"abstract":"<p><p>A test is proposed to characterize the performance of speech recognition systems. The QuickSIN test is used by audiologists to measure the ability of humans to recognize continuous speech in noise. This test yields the signal-to-noise ratio at which individuals can correctly recognize 50% of the keywords in low-context sentences. It is argued that a metric for automatic speech recognizers will ground the performance of automatic speech-in-noise recognizers to human abilities. Here, it is demonstrated that the performance of modern recognizers, built using millions of hours of unsupervised training data, is anywhere from normal to mildly impaired in noise compared to human participants.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"4 9","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
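The SNR-50 figure described above (the signal-to-noise ratio at which 50% of keywords are recognized) can be read off a measured psychometric curve by interpolation. The sketch below is a generic SNR-50 illustration only, not the clinical QuickSIN scoring formula; the example accuracy values are invented.

```python
import numpy as np

def snr50(snrs, prop_correct):
    """Linearly interpolate the SNR at which keyword accuracy crosses 50%.

    `snrs` and `prop_correct` are paired per-condition measurements;
    accuracy is assumed to increase monotonically with SNR."""
    snrs = np.asarray(snrs, dtype=float)
    p = np.asarray(prop_correct, dtype=float)
    order = np.argsort(p)                     # np.interp needs ascending xp
    return float(np.interp(0.5, p[order], snrs[order]))
```

For instance, a listener scoring 0.3 correct at 5 dB SNR and 0.5 at 10 dB SNR would be assigned an SNR-50 of 10 dB; a recognizer needing a higher SNR to reach the same 50% point would count as impaired relative to that listener.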
Sinnet G B Kristensen and Claus Elberling. "A method for calculating the grand average of a set of auditory brain-stem responses." JASA Express Letters 4(9), September 2024. https://doi.org/10.1121/10.0028320
Abstract: No generally accepted method exists for calculating a grand average waveform for a set of auditory brain-stem responses (ABRs). Here, we evaluate a new method using temporal adjustment of the underlying ABRs. Compared to a method without temporal adjustment, the new method results in higher amplitudes of the individual waves in the grand average. The grand average produced by the method better represents the group mean wave amplitudes because it reduces smearing of the individual waves caused by inter-subject latency variability.
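The smearing effect described above, and one common way to counter it, can be sketched as follows: shift each waveform (within a small lag window) to best match a template before averaging, so that inter-subject latency jitter no longer flattens the peaks. This is an illustrative cross-correlation alignment under assumed parameters, not the published method, whose alignment procedure may differ.

```python
import numpy as np

def aligned_grand_average(waveforms, max_shift=20):
    """Average equal-length waveforms after shifting each one
    (within +/- max_shift samples) to best match the plain average."""
    w = np.asarray(waveforms, dtype=float)
    template = w.mean(axis=0)                 # plain (unaligned) grand average
    shifts = range(-max_shift, max_shift + 1)
    aligned = []
    for x in w:
        # pick the integer shift maximizing correlation with the template
        best = max(shifts, key=lambda s: float(np.dot(np.roll(x, s), template)))
        aligned.append(np.roll(x, best))
    return np.mean(aligned, axis=0)
```

On synthetic responses whose peaks jitter in latency, the aligned average recovers peak amplitudes close to the individual waveforms, whereas the plain average underestimates them, which is the effect the abstract reports for real ABR waves.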
Hyeonjeong Park, Fangxu Xing, Maureen Stone, Hahn Kang, Xiaofeng Liu, Jiachen Zhuo, Sidney Fels, Timothy G Reese, Van J Wedeen, Georges El Fakhri, Jerry L Prince, and Jonghye Woo. "Investigating muscle coordination patterns with Granger causality analysis in protrusive motion from tagged and diffusion MRI." JASA Express Letters 4(9), September 2024. https://doi.org/10.1121/10.0028500. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11384280/pdf/
Abstract: The human tongue exhibits an orchestrated arrangement of internal muscles, working in sequential order to execute tongue movements. Understanding the muscle coordination patterns involved in tongue protrusive motion is crucial for advancing knowledge of tongue structure and function. To achieve this, this work focuses on five muscles known to contribute to protrusive motion. Tagged and diffusion MRI data are collected for analysis of muscle fiber geometry and motion patterns. Lagrangian strain measurements are derived, and Granger causality analysis is carried out to assess predictive information among the muscles. Experimental results suggest sequential muscle coordination of protrusive motion among distinct muscle groups.
Hyojin Kim, Viktorija Ratkute, and Bastian Epp. "Monaural and binaural masking release with speech-like stimuli." JASA Express Letters 4(9), September 2024. https://doi.org/10.1121/10.0028736
Abstract: The relevance of comodulation and interaural phase difference for speech perception is still unclear. We used speech-like stimuli to link spectro-temporal properties of formants with masking release. The stimuli comprised a tone and three masker bands centered at formant frequencies F1, F2, and F3 derived from a consonant-vowel. The target was a diotic or dichotic frequency-modulated tone following F2 trajectories. Results showed a small comodulation masking release, while the binaural masking level difference was comparable to previous findings. The data suggest that factors other than comodulation may play a dominant role in grouping frequency components in speech.
Joshua G W Bernstein, Julianna Voelker, and Sandeep A Phatak. "Headphones over the cochlear-implant sound processor to replace direct audio input." JASA Express Letters 4(9), September 2024. https://doi.org/10.1121/10.0028737
Abstract: Psychoacoustic stimulus presentation to the cochlear implant via direct audio input (DAI) is no longer possible for many newer sound processors (SPs). This study assessed the feasibility of placing circumaural headphones over the SP. Calibration spectra for loudspeaker, DAI, and headphone modalities were estimated by measuring cochlear-implant electrical output levels for tones presented to SPs on an acoustic manikin. Differences in calibration spectra between modalities arose mainly from microphone-response characteristics (high-frequency differences between DAI and the other modalities) or a proximity effect (low-frequency differences between headphones and loudspeaker). Calibration tables are provided to adjust for differences between the three modalities.
Na Hu and Amalia Arvaniti. "Individual variability in the use of tonal and non-tonal cues in intonation." JASA Express Letters 4(9), September 2024. https://doi.org/10.1121/10.0028613
Abstract: Greek uses H*, L + H*, and H* + L, all followed by L-L% edge tones, as nuclear pitch accents in statements. A previous analysis demonstrated that these accents are distinguished by F0 scaling and contour shape. This study expands the earlier investigation by exploring additional cues, namely voice quality, amplitude, and duration, in distinguishing the pitch accents, and by investigating individual variability in the selection of both F0 and non-F0 cues. Bayesian multivariate analysis and hierarchical clustering demonstrate that the accents are distinguished not only by F0 but also by additional cues at the group level, with individual variability in cue selection.