Journal of the Acoustical Society of Korea: Latest Publications

A quantitative analysis of synthetic aperture sonar image distortion according to sonar platform motion parameters
IF 0.4
Journal of the Acoustical Society of Korea Pub Date: 2021-07-01 DOI: 10.7776/ASK.2021.40.4.382
Sea-Moon Kim and Sung-Hoon Byun
{"title":"A quantitative analysis of synthetic aperture sonar image distortion according to sonar platform motion parameters","authors":"Sea-Moon Kim and Sung-Hoon Byun","doi":"10.7776/ASK.2021.40.4.382","DOIUrl":"https://doi.org/10.7776/ASK.2021.40.4.382","url":null,"abstract":"Synthetic aperture sonars as well as side scan sonars or multibeam echo sounders have been commercialized and are widely used for seafloor imaging. In Korea related research such as the development of a towed synthetic aperture sonar system is underway. In order to obtain high-resolution synthetic aperture sonar images, it is necessary to accurately estimate the platform motion on which it is installed, and a precise underwater navigation system is required. In this paper we are going to provide reference data for determining the required navigation accuracy and precision of navigation sensors by quantitatively analyzing how much distortion of the sonar images occurs according to motion characteristics of the platform equipped with the synthetic aperture sonar. Five types of motions are considered and normalized root mean square error is defined for quantitative analysis. Simulation for error analysis with parameter variation of motion characteristics results in that yaw and sway motion causes the largest image distortion whereas the effect of pitch and heave motion is not significant.","PeriodicalId":42689,"journal":{"name":"Journal of the Acoustical Society of Korea","volume":"40 1","pages":"382-390"},"PeriodicalIF":0.4,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41804028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An explorative study on the perceived emotion of music: according to cognitive styles of music listening
IF 0.4
Journal of the Acoustical Society of Korea Pub Date: 2021-07-01 DOI: 10.7776/ASK.2021.40.4.290
Jin Hee Choi and Hyun Ju Chong
{"title":"An explorative study on the perceived emotion of music: according to cognitive styles of music listening","authors":"Jin Hee Choi and Hyun Ju Chong","doi":"10.7776/ASK.2021.40.4.290","DOIUrl":"https://doi.org/10.7776/ASK.2021.40.4.290","url":null,"abstract":"The purpose of this study was to examine the perceived emotion of music according to cognitive styles of music listening. A total of 91 music-related graduate students participated in this study. They were given a questionnaire about perceived emotions of music, musical elements, and Music Empathizing-Music Systemizing Inventory. To analyze statistically, Descriptive statistics, paired t-test, ANalysis Of VAriance (ANOVA), multivariate analysis, and Pearson correlation analysis were conducted. Results showed that participants had relatively universal experience in perceived emotions of both types of music, and also showed that musical elements contributed to the experience differed by cognitive styles of music listening.","PeriodicalId":42689,"journal":{"name":"Journal of the Acoustical Society of Korea","volume":"40 1","pages":"290-296"},"PeriodicalIF":0.4,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49035354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Measurements of mid-frequency transmission loss in shallow waters off the East Sea: Comparison with Rayleigh reflection model and high-frequency bottom loss model
IF 0.4
Journal of the Acoustical Society of Korea Pub Date: 2021-07-01 DOI: 10.7776/ASK.2021.40.4.297
D. Lee, Raegeun Oh, J. Choi, Seongil Kim, Hyuckjong Kwon
{"title":"Measurements of mid-frequency transmission loss in shallow waters off the East Sea: Comparison with Rayleigh reflection model and high-frequency bottom loss model","authors":"D. Lee, Raegeun Oh, J. Choi, Seongil Kim, Hyuckjong Kwon","doi":"10.7776/ASK.2021.40.4.297","DOIUrl":"https://doi.org/10.7776/ASK.2021.40.4.297","url":null,"abstract":"When sound waves propagate over long distances in shallow water, measured transmission loss is greater than predicted one using underwater acoustic model with the Rayleigh reflection model due to inhomogeneity of the bottom. Accordingly, the US Navy predicts sound wave propagation by applying the empirical formula-based High Frequency Bottom Loss (HFBL) model. In this study, the measurement and analysis of transmission loss was conducted using mid-frequency (2.3 kHz, 3 kHz) in the shallow water of the East Sea in summer. BELLHOP eigenray tracing output shows that only sound waves with lower grazing angle than the critical angle propagate long distances for several kilometers or more, and the difference between the predicted transmission loss based on the Rayleigh reflection model and the measured transmission loss tend to increase along the propagation range. By comparing the Rayleigh reflection model and the HFBL model at the high grazing angle region, the bottom province, the input value of the HFBL model, is estimated and BELLHOP transmission loss with HFBL model is compared to measured transmission loss. As a result, it agrees well with the measurements of transmission loss.","PeriodicalId":42689,"journal":{"name":"Journal of the Acoustical Society of Korea","volume":"40 1","pages":"297-303"},"PeriodicalIF":0.4,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45128356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Segment unit shuffling layer in deep neural networks for text-independent speaker verification
IF 0.4
Journal of the Acoustical Society of Korea Pub Date: 2021-03-01 DOI: 10.7776/ASK.2021.40.2.148
Ju-Sung Heo, Hye-jin Shim, Ju-ho Kim, Ha-jin Yu
{"title":"Segment unit shuffling layer in deep neural networks for text-independent speaker verification","authors":"Ju-Sung Heo, Hye-jin Shim, Ju-ho Kim, Ha-jin Yu","doi":"10.7776/ASK.2021.40.2.148","DOIUrl":"https://doi.org/10.7776/ASK.2021.40.2.148","url":null,"abstract":"Text-Independent speaker verification needs to extract text-independent speaker embedding to improve generalization performance. However, deep neural networks that depend on training data have the potential to overfit text information instead of learning the speaker information when repeatedly learning from the identical time series. In this paper, to prevent the overfitting, we propose a segment unit shuffling layer that divides and rearranges the input layer or a hidden layer along the time axis, thus mixes the time series information. Since the segment unit shuffling layer can be applied not only to the input layer but also to the hidden layers, it can be used as generalization technique in the hidden layer, which is known to be effective compared to the generalization technique in the input layer, and can be applied simultaneously with data augmentation. In addition, the degree of distortion can be adjusted by adjusting the unit size of the segment. We observe that the performance of text-independent speaker verification is improved compared to the baseline when the proposed segment unit shuffling layer is applied.","PeriodicalId":42689,"journal":{"name":"Journal of the Acoustical Society of Korea","volume":"40 1","pages":"148-154"},"PeriodicalIF":0.4,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47949865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Side scan sonar image super-resolution using an improved initialization structure
IF 0.4
Journal of the Acoustical Society of Korea Pub Date: 2021-03-01 DOI: 10.7776/ASK.2021.40.2.121
Junyeop Lee, Bonhwa Ku, Wanjin Kim, Hanseok Ko
{"title":"Side scan sonar image super-resolution using an improved initialization structure","authors":"Junyeop Lee, Bonhwa Ku, Wanjin Kim, Hanseok Ko","doi":"10.7776/ASK.2021.40.2.121","DOIUrl":"https://doi.org/10.7776/ASK.2021.40.2.121","url":null,"abstract":"This paper deals with a super-resolution that improves the resolution of side scan sonar images using learning-based compressive sensing. Learning-based compressive sensing combined with deep learning and compressive sensing takes a structure of a feed-forward network and parameters are set automatically through learning. In particular, we propose a method that can effectively extract additional information required in the super-resolution process through various initialization methods. Representative experimental results show that the proposed method provides improved performance in terms of Peak Signal-to-Noise Ratio (PSNR) and Structure Similarity Index Measure (SSIM) than conventional methods.","PeriodicalId":42689,"journal":{"name":"Journal of the Acoustical Society of Korea","volume":"40 1","pages":"121-129"},"PeriodicalIF":0.4,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46567183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A robust data association gate method of non-linear target tracking in dense cluttered environment
IF 0.4
Journal of the Acoustical Society of Korea Pub Date: 2021-03-01 DOI: 10.7776/ASK.2021.40.2.109
Seong-Weon Kim, Taek-ik Kwon, Hyeon‑Deok Cho
{"title":"A robust data association gate method of non-linear target tracking in dense cluttered environment","authors":"Seong-Weon Kim, Taek-ik Kwon, Hyeon‑Deok Cho","doi":"10.7776/ASK.2021.40.2.109","DOIUrl":"https://doi.org/10.7776/ASK.2021.40.2.109","url":null,"abstract":"","PeriodicalId":42689,"journal":{"name":"Journal of the Acoustical Society of Korea","volume":"40 1","pages":"109-120"},"PeriodicalIF":0.4,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41790298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development of deep learning-based holographic ultrasound generation algorithm
IF 0.4
Journal of the Acoustical Society of Korea Pub Date: 2021-03-01 DOI: 10.7776/ASK.2021.40.2.169
Moon Hwan Lee and Jae Youn Hwang
{"title":"Development of deep learning-based holographic ultrasound generation algorithm","authors":"Moon Hwan Lee and Jae Youn Hwang","doi":"10.7776/ASK.2021.40.2.169","DOIUrl":"https://doi.org/10.7776/ASK.2021.40.2.169","url":null,"abstract":"Recently, an ultrasound hologram and its applications have gained attention in the ultrasound research field. However, the determination technique of transmit signal phases, which generate a hologram, has not been significantly advanced from the previous algorithms which are time-consuming iterative methods. Thus, we applied the deep learning technique, which has been previously adopted to generate an optical hologram, to generate an ultrasound hologram. We further examined the Deep learning-based Holographic Ultrasound Generation algorithm (Deep-HUG). We implement the U-Net-based algorithm and examine its generalizability by training on a dataset, which consists of randomly distributed disks, and testing on the alphabets (A-Z). Furthermore, we compare the Deep-HUG with the previous algorithm in terms of computation time, accuracy, and uniformity. It was found that the accuracy and uniformity of the Deep-HUG are somewhat lower than those of the previous algorithm whereas the computation time is 190 times faster than that of the previous algorithm, demonstrating that Deep-HUG has potential as a useful technique to rapidly generate an ultrasound hologram for various applications.","PeriodicalId":42689,"journal":{"name":"Journal of the Acoustical Society of Korea","volume":"40 1","pages":"169-175"},"PeriodicalIF":0.4,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48661920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance analysis of weakly-supervised sound event detection system based on the mean-teacher convolutional recurrent neural network model
IF 0.4
Journal of the Acoustical Society of Korea Pub Date: 2021-03-01 DOI: 10.7776/ASK.2021.40.2.139
Seokjin Lee
{"title":"Performance analysis of weakly-supervised sound event detection system based on the mean-teacher convolutional recurrent neural network model","authors":"Seokjin Lee","doi":"10.7776/ASK.2021.40.2.139","DOIUrl":"https://doi.org/10.7776/ASK.2021.40.2.139","url":null,"abstract":"This paper introduces and implements a Sound Event Detection (SED) system based on weaklysupervised learning where only part of the data is labeled, and analyzes the effect of parameters. The SED system estimates the classes and onset/offset times of events in the acoustic signal. In order to train the model, all information on the event class and onset/offset times must be provided. Unfortunately, the onset/offset times are hard to be labeled exactly. Therefore, in the weakly-supervised task, the SED model is trained by “strongly labeled data” including the event class and activations, “weakly labeled data” including the event class, and “unlabeled data” without any label. Recently, the SED systems using the mean-teacher model are widely used for the task with several parameters. These parameters should be chosen carefully because they may affect the performance. In this paper, performance analysis was performed on parameters, such as the feature, moving average parameter, weight of the consistency cost function, ramp-up length, and maximum learning rate, using the data of DCASE 2020 Task 4. Effects and the optimal values of the parameters were discussed.","PeriodicalId":42689,"journal":{"name":"Journal of the Acoustical Society of Korea","volume":"40 1","pages":"139-147"},"PeriodicalIF":0.4,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42050357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Analysis of false alarm possibility using simulation of back-scattering signals from water masses
IF 0.4
Journal of the Acoustical Society of Korea Pub Date: 2021-03-01 DOI: 10.7776/ASK.2021.40.2.099
Yonghoon Ha
{"title":"Analysis of false alarm possibility using simulation of back-scattering signals from water masses","authors":"Yonghoon Ha","doi":"10.7776/ASK.2021.40.2.099","DOIUrl":"https://doi.org/10.7776/ASK.2021.40.2.099","url":null,"abstract":"In this paper numerical wave propagation experiments have been performed to visually confirm whether the signals scattered by water masses can be a false alarm in active sonar. The numerical environments consist of exaggerated water masses as targets in free space. Using a pseudospectral time-domain model for irregular boundary, the back-scattered signals have been calculated and compared with analytic solutions. Also, the sound propagation was simulated. Consequently, it was verified that water masses themselves could not be detected as a false target.","PeriodicalId":42689,"journal":{"name":"Journal of the Acoustical Society of Korea","volume":"40 1","pages":"99-108"},"PeriodicalIF":0.4,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44425605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Snoring sound detection method using attention-based convolutional bidirectional gated recurrent unit
IF 0.4
Journal of the Acoustical Society of Korea Pub Date: 2021-03-01 DOI: 10.7776/ASK.2021.40.2.155
Min-soo Kim, Gi Yong Lee, Hyoung‐Gook Kim
{"title":"Snoring sound detection method using attention-based convolutional bidirectional gated recurrent unit","authors":"Min-soo Kim, Gi Yong Lee, Hyoung‐Gook Kim","doi":"10.7776/ASK.2021.40.2.155","DOIUrl":"https://doi.org/10.7776/ASK.2021.40.2.155","url":null,"abstract":"This paper proposes an automatic method for detecting snore sound, one of the important symptoms of sleep apnea patients. In the proposed method, sound signals generated during sleep are input to detect a sound generation section, and a spectrogram transformed from the detected sound section is applied to a classifier based on a convolutional bidirectional gated recurrent unit (CBGRU) with attention mechanism. The applied attention mechanism improved the snoring sound detection performance by extending the CBGRU model to learn discriminative feature representation for the snoring detection. The experimental results show that the proposed snoring detection method improves the accuracy by approximately 3.1 % ~ 5.5 % than existing method.","PeriodicalId":42689,"journal":{"name":"Journal of the Acoustical Society of Korea","volume":"40 1","pages":"155-160"},"PeriodicalIF":0.4,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42589155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0