{"title":"The perceptual distinctiveness of the [n-l] contrast in different vowel and tonal contexts.","authors":"Pauline Bolin Liu, Mingxing Li","doi":"10.1121/10.0034196","DOIUrl":"10.1121/10.0034196","url":null,"abstract":"<p><p>This study investigates the relative perceptual distinction of the [n] vs [l] contrast in different vowel contexts ([_a] vs [_i]) and tonal contexts (high-initial such as HH, HL, vs low-initial such as LL, LH). The results of two speeded AX discrimination experiments indicated that a [n-l] contrast is perceptually more distinct in the [_a] context and with a high-initial tone. The results are consistent with the typology of the [n] vs [l] contrast across Chinese dialects, which is more frequently observed in the [_a] context and with a high-initial tone, supporting a connection between phonological typology and perceptual distinctiveness.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"4 11","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142559644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech.","authors":"Melissa J Polonenko, Ross K Maddox","doi":"10.1121/10.0034329","DOIUrl":"10.1121/10.0034329","url":null,"abstract":"<p><p>Deriving human neural responses to natural speech is now possible, but the responses to male- and female-uttered speech have been shown to differ. These talker differences may complicate interpretations or restrict experimental designs geared toward more realistic communication scenarios. This study found that when a male talker and a female talker had the same fundamental frequency, auditory brainstem responses (ABRs) were very similar. Those responses became smaller and later with increasing fundamental frequency, as did click ABRs with increasing stimulus rates. Modeled responses suggested that the speech and click ABR differences were reasonably predicted by peripheral and brainstem processing of stimulus acoustics.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"4 11","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11558516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142592457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating audio quality ratings and scene analysis performance of hearing-impaired listeners for multi-track music.","authors":"Aravindan Joseph Benjamin, Kai Siedenburg","doi":"10.1121/10.0032474","DOIUrl":"10.1121/10.0032474","url":null,"abstract":"<p><p>This study assessed musical scene analysis (MSA) performance and subjective quality ratings of multi-track mixes as a function of spectral manipulations using the EQ-transform (% EQT). This transform exaggerates or reduces the spectral shape changes in a given track with respect to a relatively flat, smooth reference spectrum. Data from 30 younger normal hearing (yNH) and 23 older hearing-impaired (oHI) participants showed that MSA performance was robust to changes in % EQT. However, audio quality ratings elicited from yNH participants were more sensitive to % EQT than those of oHI participants. A significant positive correlation between MSA performance and quality ratings among oHI showed that oHI participants with better MSA performances gave higher-quality ratings, whereas there was no significant correlation for yNH listeners. Overall, these data indicate the complementary virtue of measures of MSA and audio quality ratings for assessing the suitability of music mixes for hearing-impaired listeners.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"4 11","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142633867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High-resolution acoustically informed maps of sound speed.","authors":"Scott Loranger, Brendan DeCourcy, Weifeng Gordon Zhang, Ying-Tsong Lin, Andone Lavery","doi":"10.1121/10.0032475","DOIUrl":"https://doi.org/10.1121/10.0032475","url":null,"abstract":"<p><p>As oceanographic models advance in complexity, accuracy, and resolution, in situ measurements must provide spatiotemporal information with sufficient resolution to inform and validate those models. In this study, water masses at the New England shelf break were mapped using scientific echosounders combined with water column property measurements from a single conductivity, temperature, and depth (CTD) profile. The acoustically-inferred map of sound speed was compared with a sound speed cross section based on two-dimensional interpolation of multiple CTD profiles. Long-range acoustic propagation models were then parameterized by the sound speed profiles estimated by the two methods and differences were compared.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"4 10","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142482377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Replayed reef sounds induce settlement of Favia fragum coral larvae in aquaria and field environmentsa).","authors":"Nadège Aoki, Benjamin Weiss, Youenn Jézéquel, Amy Apprill, T Aran Mooney","doi":"10.1121/10.0032407","DOIUrl":"10.1121/10.0032407","url":null,"abstract":"<p><p>Acoustic cues of healthy reefs are known to support critical settlement behaviors for one reef-building coral, but acoustic responses have not been demonstrated in additional species. Settlement of Favia fragum larvae in response to replayed coral reef soundscapes were observed by exposing larvae in aquaria and reef settings to playback sound treatments for 24-72 h. Settlement increased under 24 h sound treatments in both experiments. The results add to growing knowledge that acoustically mediated settlement may be widespread among stony corals with species-specific attributes, suggesting sound could be one tool employed to rehabilitate and build resilience within imperiled reef communities.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"4 10","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142482387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beam-space spatial spectrum reconstruction under unknown stationary near-field interference: Algorithm design and experimental verification.","authors":"Jichen Chu, Lei Cheng, Wen Xu","doi":"10.1121/10.0030334","DOIUrl":"https://doi.org/10.1121/10.0030334","url":null,"abstract":"<p><p>In acoustic array signal processing, spatial spectrum estimation and the corresponding direction-of-arrival estimation are sometimes affected by stationary near-field interferences, presenting a considerable challenge for the target detection. To address the challenge, this paper proposes a beam-space spatial spectrum reconstruction algorithm. The proposed algorithm overcomes the limitations of common spatial spectrum estimation algorithms designed for near-field interference scenarios, which require knowledge of the near-field interference array manifold. The robustness and efficacy of the proposed algorithm under strong stationary near-field interference are confirmed through the analysis of simulated and real-life experimental data.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"4 10","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142360734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tests of human auditory temporal resolution: Simulations of Bayesian threshold estimation for auditory gap detection.","authors":"Shuji Mori, Yuto Murata, Takashi Morimoto, Yasuhide Okamoto, Sho Kanzaki","doi":"10.1121/10.0028501","DOIUrl":"10.1121/10.0028501","url":null,"abstract":"<p><p>In an attempt to develop tests of auditory temporal resolution using gap detection, we conducted computer simulations of Zippy Estimation by Sequential Testing (ZEST), an adaptive Bayesian threshold estimation procedure, for measuring gap detection thresholds. The results showed that the measures of efficiency and precision of ZEST changed with the mean and standard deviation (SD) of the initial probability density function implemented in ZEST. Appropriate combinations of mean and SD values led to efficient ZEST performance; i.e., the threshold estimates converged to their true values after 10 to 15 trials.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"4 9","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142121270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hearing aid evaluation for music: Accounting for acoustical variability of music stimuli.","authors":"Christophe Lesimple, Volker Kuehnel, Kai Siedenburg","doi":"10.1121/10.0028397","DOIUrl":"10.1121/10.0028397","url":null,"abstract":"<p><p>Music is an important signal class for hearing aids, and musical genre is often used as a descriptor for stimulus selection. However, little research has systematically investigated the acoustical properties of musical genres with respect to hearing aid amplification. Here, extracts from a combination of two comprehensive music databases were acoustically analyzed. Considerable overlap in acoustic descriptor space between genres emerged. By simulating hearing aid processing, it was shown that effects of amplification regarding dynamic range compression and spectral weighting differed across musical genres, underlining the critical role of systematic stimulus selection for research on music and hearing aids.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"4 9","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142134644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing human and machine speech recognition in noise with QuickSIN.","authors":"Malcolm Slaney, Matthew B Fitzgerald","doi":"10.1121/10.0028612","DOIUrl":"10.1121/10.0028612","url":null,"abstract":"<p><p>A test is proposed to characterize the performance of speech recognition systems. The QuickSIN test is used by audiologists to measure the ability of humans to recognize continuous speech in noise. This test yields the signal-to-noise ratio at which individuals can correctly recognize 50% of the keywords in low-context sentences. It is argued that a metric for automatic speech recognizers will ground the performance of automatic speech-in-noise recognizers to human abilities. Here, it is demonstrated that the performance of modern recognizers, built using millions of hours of unsupervised training data, is anywhere from normal to mildly impaired in noise compared to human participants.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"4 9","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A method for calculating the grand average of a set of auditory brain-stem responses.","authors":"Sinnet G B Kristensen, Claus Elberling","doi":"10.1121/10.0028320","DOIUrl":"10.1121/10.0028320","url":null,"abstract":"<p><p>To calculate a grand average waveform for a set of auditory brain-stem responses (ABRs), no generally accepted method exists. Here, we evaluate a new method using temporal adjustment of the underlying ABRs. Compared to a method without temporal adjustment, the new method results in higher amplitudes of the individual waves in the grand average. The grand average produced by the method better represents the group mean wave-amplitudes because it reduces smearing of the individual waves caused by inter-subject latency variability.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":"4 9","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}