Neural Ambisonic Encoding For Multi-Speaker Scenarios Using A Circular Microphone Array

Yue Qiao, Vinay Kothapally, Meng Yu, Dong Yu
{"title":"Neural Ambisonic Encoding For Multi-Speaker Scenarios Using A Circular Microphone Array","authors":"Yue Qiao, Vinay Kothapally, Meng Yu, Dong Yu","doi":"arxiv-2409.06954","DOIUrl":null,"url":null,"abstract":"Spatial audio formats like Ambisonics are playback device layout-agnostic and\nwell-suited for applications such as teleconferencing and virtual reality.\nConventional Ambisonic encoding methods often rely on spherical microphone\narrays for efficient sound field capture, which limits their flexibility in\npractical scenarios. We propose a deep learning (DL)-based approach, leveraging\na two-stage network architecture for encoding circular microphone array signals\ninto second-order Ambisonics (SOA) in multi-speaker environments. In addition,\nwe introduce: (i) a novel loss function based on spatial power maps to\nregularize inter-channel correlations of the Ambisonic signals, and (ii) a\nchannel permutation technique to resolve the ambiguity of encoding vertical\ninformation using a horizontal circular array. Evaluation on simulated speech\nand noise datasets shows that our approach consistently outperforms traditional\nsignal processing (SP) and DL-based methods, providing significantly better\ntimbral and spatial quality and higher source localization accuracy. 
Binaural\naudio demos with visualizations are available at\nhttps://bridgoon97.github.io/NeuralAmbisonicEncoding/.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":"75 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06954","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Spatial audio formats like Ambisonics are playback device layout-agnostic and well-suited for applications such as teleconferencing and virtual reality. Conventional Ambisonic encoding methods often rely on spherical microphone arrays for efficient sound field capture, which limits their flexibility in practical scenarios. We propose a deep learning (DL)-based approach, leveraging a two-stage network architecture for encoding circular microphone array signals into second-order Ambisonics (SOA) in multi-speaker environments. In addition, we introduce: (i) a novel loss function based on spatial power maps to regularize inter-channel correlations of the Ambisonic signals, and (ii) a channel permutation technique to resolve the ambiguity of encoding vertical information using a horizontal circular array. Evaluation on simulated speech and noise datasets shows that our approach consistently outperforms traditional signal processing (SP) and DL-based methods, providing significantly better timbral and spatial quality and higher source localization accuracy. Binaural audio demos with visualizations are available at https://bridgoon97.github.io/NeuralAmbisonicEncoding/.
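For context, the conventional signal-processing baseline the paper compares against amounts to encoding a source by weighting its signal with real spherical harmonics evaluated at the source direction. A minimal sketch of second-order (9-channel) encoding is shown below, assuming the common AmbiX convention (ACN channel ordering, SN3D normalization); the paper's exact convention is not stated in this abstract, so treat the convention choice as an assumption:

```python
import numpy as np

def soa_encode(azimuth, elevation):
    """Real spherical-harmonic gains up to order 2.

    Assumes ACN channel ordering and SN3D normalization (AmbiX).
    Angles are in radians; azimuth measured counter-clockwise from front.
    """
    t, p = azimuth, elevation
    return np.array([
        1.0,                                         # ACN 0: W (order 0)
        np.sin(t) * np.cos(p),                       # ACN 1: Y
        np.sin(p),                                   # ACN 2: Z
        np.cos(t) * np.cos(p),                       # ACN 3: X
        np.sqrt(3) / 2 * np.sin(2 * t) * np.cos(p) ** 2,  # ACN 4: V
        np.sqrt(3) / 2 * np.sin(t) * np.sin(2 * p),       # ACN 5: T
        0.5 * (3 * np.sin(p) ** 2 - 1),                   # ACN 6: R
        np.sqrt(3) / 2 * np.cos(t) * np.sin(2 * p),       # ACN 7: S
        np.sqrt(3) / 2 * np.cos(2 * t) * np.cos(p) ** 2,  # ACN 8: U
    ])

# Encode a 1-second mono signal arriving from azimuth 30°, elevation 0°
sig = np.random.randn(16000)
gains = soa_encode(np.deg2rad(30.0), 0.0)
soa = gains[:, None] * sig[None, :]   # shape (9, 16000): one row per SOA channel
```

Note that for a source at zero elevation the Z channel (ACN 2) is identically zero, which illustrates the vertical ambiguity the paper's channel permutation technique addresses: a horizontal circular array cannot distinguish mirror-image elevations by geometry alone.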