{"title":"A blueprint for truncation resonance placement in elastic diatomic lattices with unit cell asymmetrya).","authors":"Hasan B Al Ba'ba'a, Hosam Yousef, Mostafa Nouh","doi":"10.1121/10.0027939","DOIUrl":"https://doi.org/10.1121/10.0027939","url":null,"abstract":"<p><p>Elastic periodic lattices act as mechanical filters of incident vibrations. By and large, they forbid wave propagation within bandgaps and resonate outside them. However, they often encounter \"truncation resonances\" (TRs) inside bandgaps when certain conditions are met. In this study, we show that the extent of unit cell asymmetry, its mass and stiffness contrasts, and the boundary conditions all play a role in the TR location and wave profile. The work is experimentally supported via two examples that validate the methodology, and a set of design charts is provided as a blueprint for selective TR placement in diatomic lattices.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141735897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Who is singing? Voice recognition from spoken versus sung speech.","authors":"Angela Cooper, Matthew Eitel, Natalie Fecher, Elizabeth Johnson, Laura K Cirelli","doi":"10.1121/10.0026385","DOIUrl":"10.1121/10.0026385","url":null,"abstract":"<p><p>Singing is socially important but constrains voice acoustics, potentially masking certain aspects of vocal identity. Little is known about how well listeners extract talker details from sung speech or identify talkers across the sung and spoken modalities. Here, listeners (n = 149) were trained to recognize sung or spoken voices and then tested on their identification of these voices in both modalities. Learning vocal identities was initially easier through speech than song. At test, cross-modality voice recognition was above chance, but weaker than within-modality recognition. We conclude that talker information is accessible in sung speech, despite acoustic constraints in song.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141422096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Age and masking effects on acoustic cues for vowel categorizationa).","authors":"Mishaela DiNino","doi":"10.1121/10.0026371","DOIUrl":"10.1121/10.0026371","url":null,"abstract":"<p><p>Age-related changes in auditory processing may reduce physiological coding of acoustic cues, contributing to older adults' difficulty perceiving speech in background noise. This study investigated whether older adults differed from young adults in patterns of acoustic cue weighting for categorizing vowels in quiet and in noise. All participants relied primarily on spectral quality to categorize /ɛ/ and /æ/ sounds under both listening conditions. However, relative to young adults, older adults exhibited greater reliance on duration and less reliance on spectral quality. These results suggest that aging alters patterns of perceptual cue weights that may influence speech recognition abilities.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141332651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The perception and production of Korean stops in second dialect acquisition.","authors":"Hyunjung Lee, Eun Jong Kong, Jeffrey J Holliday","doi":"10.1121/10.0026374","DOIUrl":"10.1121/10.0026374","url":null,"abstract":"<p><p>This study investigated the acoustic cue weighting of the Korean stop contrast in the perception and production of speakers who moved from a nonstandard dialect region to the standard dialect region, Seoul. Through comparing these mobile speakers with data from nonmobile speakers in Seoul and their home region, it was found that the speakers shifted their cue weighting in perception and production to some degree, but also retained some subphonemic features of their home dialect in production. The implications of these results for the role of dialect prestige and awareness in second dialect acquisition are discussed.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141312448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification of phonation types in singing voice using wavelet scattering network-based features.","authors":"Kiran Reddy Mittapalle, Paavo Alku","doi":"10.1121/10.0026241","DOIUrl":"https://doi.org/10.1121/10.0026241","url":null,"abstract":"<p><p>The automatic classification of phonation types in singing voice is essential for tasks such as identification of singing style. In this study, it is proposed to use wavelet scattering network (WSN)-based features for classification of phonation types in singing voice. WSN, which has a close similarity with auditory physiological models, generates acoustic features that greatly characterize the information related to pitch, formants, and timbre. Hence, the WSN-based features can effectively capture the discriminative information across phonation types in singing voice. The experimental results show that the proposed WSN-based features improved phonation classification accuracy by at least 9% compared to state-of-the-art features.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141285490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mid-frequency acoustic tracking of breaking waves.","authors":"Ryan Saenger, Luc Lenain, William S Hodgkiss","doi":"10.1121/10.0026149","DOIUrl":"https://doi.org/10.1121/10.0026149","url":null,"abstract":"<p><p>Large surface wave breaking events in deep water are acoustically detectable by beamforming at 5-6 kHz with a mid-frequency planar array located 130 m below the surface. Due to the array's depth and modest 1 m horizontal aperture, wave breaking events cannot be tracked accurately by beamforming alone. Their trajectories are estimated instead by splitting the array into sub-arrays, beamforming each sub-array toward the source, and computing the temporal cross-correlation of the sub-array beams. Source tracks estimated from sub-array cross-correlations match the trajectories of breaking waves that are visible in aerial images of the ocean surface above the array.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141201678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Heard off Heard: Acoustic detections of sperm whales (Physeter macrocephalus) and other cetaceans off Heard Island.","authors":"Brian S Miller, Cara Masere, Mark Milnes, Jaimie Cleeland, Timothy Lamb, Dale Maschette, Dirk Welsford","doi":"10.1121/10.0026242","DOIUrl":"10.1121/10.0026242","url":null,"abstract":"<p><p>An underwater acoustic recorder was moored off Heard Island from September 2017 through March 2018 to listen for marine mammals. Analysis of data was initially conducted by visual inspection of long-term spectral averages to reveal sounds from sperm whales, Antarctic and pygmy blue whales, fin whales, minke whales, odontocete whistles, and noise from nearby ships. Automated detection of sperm whale clicks revealed they were seldom detected from September through January (n = 35 h) but were detected nearly every day of February and March (n = 684 h). Additional analysis of these detections revealed further diel and demographic patterns.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141452336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint equalization and decoding with per-survivor processing based on super-trellis in time-varying underwater acoustic channels.","authors":"Xu Kou, Yanbo Wu, Min Zhu","doi":"10.1121/10.0026372","DOIUrl":"https://doi.org/10.1121/10.0026372","url":null,"abstract":"<p><p>This Letter proposes a low-complexity joint equalization and decoding reception scheme based on super-trellis per-survivor processing, making it possible to apply maximum likelihood sequence estimation in high-order underwater acoustic communications under fast time-varying channels. The technique combines trellis-coded modulation states and intersymbol interference states and uses per-survivor processing to track channel parameters. Furthermore, a general trellis configuration for arbitrary order quadrature amplitude modulation signal is provided when truncate the channel is used to describe the intersymbol interference state to 1. Sea trials results show that the performance of proposed method can be more than 1.4 dB superiority than conventional schemes.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141319167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive interference suppression based on an invariant subspace of matrices matching for a horizontal array in underwater acoustics.","authors":"Xueli Sheng, Dewen Li, Ran Cao, Xuan Zhou, Jiarui Yin","doi":"10.1121/10.0026373","DOIUrl":"https://doi.org/10.1121/10.0026373","url":null,"abstract":"<p><p>Passive detection of target-of-interest (TOI) within strong interferences poses a challenge. This paper introduces an adaptive interference suppression based on an invariant subspace of matrix matching. Assume that the TOI-bearing intervals are known. We define a correlation ratio for each eigenvector to obtain the highest one. Then, we use invariant subspace of matrix matching to measure the distance between two invariant projection matrices of this eigenvector. This identifies and removes the eigenvectors associated with TOI. Finally, the remaining eigenvectors are subtracted from the sample covariance matrix to suppress interference and noise. The viability of the proposed method is demonstrated experimentally.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141312447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Static and moving minimum audible angle: Independent contributions of reverberation and position.","authors":"Anna Dietze, Samuel W Clapp, Bernhard U Seeber","doi":"10.1121/10.0025992","DOIUrl":"https://doi.org/10.1121/10.0025992","url":null,"abstract":"<p><p>Two measures of auditory spatial resolution, the minimum audible angle and the minimum audible movement angle, have been obtained in a simulated acoustic environment using Ambisonics sound field reproduction. Trajectories were designed to provide no reliable cues for the spatial discrimination task. Larger threshold angles were found in reverberant compared to anechoic conditions, for stimuli on the side compared to the front, and for moving compared to static stimuli. The effect of reverberation appeared to be independent of the position of the sound source (same relative threshold increase) and was independently present for static and moving sound sources.</p>","PeriodicalId":73538,"journal":{"name":"JASA express letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140923846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}