Speaker Distance Estimation in Enclosures from Single-Channel Audio
Michael Neri, Archontis Politis, Daniel Krause, Marco Carli, Tuomas Virtanen
arXiv - CS - Sound, 2024-03-26. DOI: arxiv-2403.17514
Abstract
Distance estimation from audio plays a crucial role in various applications,
such as acoustic scene analysis, sound source localization, and room modeling.
Most studies center on a classification approach, where distances are
discretized into distinct categories, enabling smoother model training and
higher accuracy but limiting the precision of the obtained sound source
position. In contrast, in this paper we propose a novel approach for
continuous distance estimation from
audio signals using a convolutional recurrent neural network with an attention
module. The attention mechanism enables the model to focus on relevant temporal
and spectral features, enhancing its ability to capture fine-grained
distance-related information. To evaluate the effectiveness of our proposed
method, we conduct extensive experiments using audio recordings in controlled
environments with three levels of realism (synthetic room impulse response,
measured response with convolved speech, and real recordings) on four datasets
(our synthetic dataset, QMULTIMIT, VoiceHome-2, and STARSS23). Experimental
results show that the model achieves an absolute error of 0.11 meters in a
noiseless synthetic scenario. In the hybrid scenario, the absolute error is
about 1.30 meters. The algorithm's performance in the
real scenario, where unpredictable environmental factors and noise are
prevalent, yields an absolute error of approximately 0.50 meters. For
reproducibility, we make the model, code, and synthetic datasets available at
https://github.com/michaelneri/audio-distance-estimation.
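The abstract describes an attention module that lets the model weight temporal and spectral features before regressing a continuous distance. The exact architecture is given in the paper and repository, not here; the following is a minimal NumPy sketch of one common variant of this idea, attention-weighted temporal pooling over per-frame embeddings followed by a linear regression head. All dimensions, weights, and names (`features`, `w_att`, `w_out`) are hypothetical placeholders, randomly initialized purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame embeddings, as a conv-recurrent front end might
# produce: T time frames, each with a D-dimensional feature vector.
T, D = 100, 64
features = rng.standard_normal((T, D))

# Learned attention parameters (randomly initialized here for illustration).
w_att = rng.standard_normal(D)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Attention weights let the model emphasize frames carrying distance cues
# (e.g. direct-to-reverberant energy transitions).
alpha = softmax(features @ w_att)   # one weight per frame, shape (T,)
context = alpha @ features          # attention-weighted sum, shape (D,)

# A linear regression head maps the pooled vector to a continuous
# distance estimate in meters (weights again hypothetical).
w_out = rng.standard_normal(D)
distance = float(context @ w_out)
```

Pooling with learned attention rather than plain averaging is what allows a regression head to output a continuous distance instead of a class label, which is the shift from classification to regression that the abstract emphasizes.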