Learning Source Disentanglement in Neural Audio Codec

Xiaoyu Bie, Xubo Liu, Gaël Richard
arXiv:2409.11228 · arXiv - EE - Audio and Speech Processing · published 2024-09-17
Citations: 0

Abstract

Neural audio codecs have significantly advanced audio compression by efficiently converting continuous audio signals into discrete tokens. These codecs preserve high-quality sound and enable sophisticated sound generation through generative models trained on these tokens. However, existing neural codec models are typically trained on large, undifferentiated audio datasets, neglecting the essential discrepancies between sound domains such as speech, music, and environmental sound effects. This oversight complicates data modeling and poses additional challenges to the controllability of sound generation. To tackle these issues, we introduce the Source-Disentangled Neural Audio Codec (SD-Codec), a novel approach that combines audio coding and source separation. By jointly learning audio resynthesis and separation, SD-Codec explicitly assigns audio signals from different domains to distinct codebooks, i.e., sets of discrete representations. Experimental results indicate that SD-Codec not only maintains competitive resynthesis quality but also, supported by the separation results, demonstrates successful disentanglement of different sources in the latent space, thereby enhancing interpretability in audio codecs and providing potentially finer control over the audio generation process.
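The core idea, assigning each sound domain to its own codebook of discrete representations, can be illustrated with a minimal vector-quantization sketch. This is a hypothetical toy example, not the paper's architecture: the codebook sizes, latent dimension, and encoder are invented for illustration, and the codebooks are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# One small codebook per sound domain (values are random stand-ins
# for learned codes; sizes are illustrative only).
DOMAINS = ["speech", "music", "sfx"]
DIM, CODES = 8, 16
codebooks = {d: rng.normal(size=(CODES, DIM)) for d in DOMAINS}

def quantize(latent, domain):
    """Map a continuous latent to its nearest code in the domain's codebook."""
    book = codebooks[domain]
    idx = int(np.argmin(np.linalg.norm(book - latent, axis=1)))
    return idx, book[idx]

def encode_sources(source_latents):
    """Quantize each source stream with its own codebook, so the discrete
    tokens for speech, music, and effects stay disentangled."""
    tokens, quantized = {}, {}
    for domain, z in source_latents.items():
        idx, q = quantized_pair = quantize(z, domain)
        tokens[domain], quantized[domain] = quantized_pair
    return tokens, quantized

# Toy usage: per-source latents from a hypothetical separation encoder.
latents = {d: rng.normal(size=DIM) for d in DOMAINS}
tokens, quantized = encode_sources(latents)
print(tokens)
```

Because each domain indexes into its own codebook, a downstream generative model can manipulate one source's token stream without disturbing the others, which is the kind of finer-grained control the abstract alludes to.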