Speaking from Coarse to Fine: Improving Neural Codec Language Model via Multi-Scale Speech Coding and Generation
{"title":"从粗到细:通过多尺度语音编码和生成改进神经编解码器语言模型","authors":"Haohan Guo, Fenglong Xie, Dongchao Yang, Xixin Wu, Helen Meng","doi":"arxiv-2409.11630","DOIUrl":null,"url":null,"abstract":"The neural codec language model (CLM) has demonstrated remarkable performance\nin text-to-speech (TTS) synthesis. However, troubled by ``recency bias\", CLM\nlacks sufficient attention to coarse-grained information at a higher temporal\nscale, often producing unnatural or even unintelligible speech. This work\nproposes CoFi-Speech, a coarse-to-fine CLM-TTS approach, employing multi-scale\nspeech coding and generation to address this issue. We train a multi-scale\nneural codec, CoFi-Codec, to encode speech into a multi-scale discrete\nrepresentation, comprising multiple token sequences with different time\nresolutions. Then, we propose CoFi-LM that can generate this representation in\ntwo modes: the single-LM-based chain-of-scale generation and the\nmultiple-LM-based stack-of-scale generation. In experiments, CoFi-Speech\nsignificantly outperforms single-scale baseline systems on naturalness and\nspeaker similarity in zero-shot TTS. The analysis of multi-scale coding\ndemonstrates the effectiveness of CoFi-Codec in learning multi-scale discrete\nspeech representations while keeping high-quality speech reconstruction. The\ncoarse-to-fine multi-scale generation, especially for the stack-of-scale\napproach, is also validated as a crucial approach in pursuing a high-quality\nneural codec language model for TTS.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Speaking from Coarse to Fine: Improving Neural Codec Language Model via Multi-Scale Speech Coding and Generation\",\"authors\":\"Haohan Guo, Fenglong Xie, Dongchao Yang, Xixin Wu, Helen Meng\",\"doi\":\"arxiv-2409.11630\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The neural codec language model (CLM) has demonstrated remarkable performance\\nin text-to-speech (TTS) synthesis. However, troubled by ``recency bias\\\", CLM\\nlacks sufficient attention to coarse-grained information at a higher temporal\\nscale, often producing unnatural or even unintelligible speech. This work\\nproposes CoFi-Speech, a coarse-to-fine CLM-TTS approach, employing multi-scale\\nspeech coding and generation to address this issue. We train a multi-scale\\nneural codec, CoFi-Codec, to encode speech into a multi-scale discrete\\nrepresentation, comprising multiple token sequences with different time\\nresolutions. Then, we propose CoFi-LM that can generate this representation in\\ntwo modes: the single-LM-based chain-of-scale generation and the\\nmultiple-LM-based stack-of-scale generation. In experiments, CoFi-Speech\\nsignificantly outperforms single-scale baseline systems on naturalness and\\nspeaker similarity in zero-shot TTS. The analysis of multi-scale coding\\ndemonstrates the effectiveness of CoFi-Codec in learning multi-scale discrete\\nspeech representations while keeping high-quality speech reconstruction. 
The\\ncoarse-to-fine multi-scale generation, especially for the stack-of-scale\\napproach, is also validated as a crucial approach in pursuing a high-quality\\nneural codec language model for TTS.\",\"PeriodicalId\":501284,\"journal\":{\"name\":\"arXiv - EE - Audio and Speech Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Audio and Speech Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11630\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11630","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Haohan Guo, Fenglong Xie, Dongchao Yang, Xixin Wu, Helen Meng

The neural codec language model (CLM) has demonstrated remarkable performance in text-to-speech (TTS) synthesis. However, hampered by "recency bias", the CLM pays insufficient attention to coarse-grained information at higher temporal scales, often producing unnatural or even unintelligible speech. This work proposes CoFi-Speech, a coarse-to-fine CLM-TTS approach that employs multi-scale speech coding and generation to address this issue. We train a multi-scale neural codec, CoFi-Codec, to encode speech into a multi-scale discrete representation comprising multiple token sequences at different time resolutions. We then propose CoFi-LM, which can generate this representation in two modes: single-LM-based chain-of-scale generation and multiple-LM-based stack-of-scale generation. In experiments, CoFi-Speech significantly outperforms single-scale baseline systems in naturalness and speaker similarity for zero-shot TTS. An analysis of multi-scale coding demonstrates that CoFi-Codec learns multi-scale discrete speech representations while maintaining high-quality speech reconstruction. Coarse-to-fine multi-scale generation, especially the stack-of-scale approach, is also validated as crucial to building a high-quality neural codec language model for TTS.
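
To make the two generation modes concrete, below is a minimal, runnable Python sketch of the control flow the abstract implies. It is an assumption-laden illustration, not the authors' implementation: the names (ScaleLM, chain_of_scale, stack_of_scale), the conditioning scheme, and the dummy sampling are all hypothetical placeholders, since the abstract does not specify these interfaces.

```python
# Hypothetical sketch of the two coarse-to-fine generation modes named in
# the abstract. Everything here (names, conditioning, sampling) is
# illustrative; the actual CoFi-LM architecture is not given in the abstract.
import random
from typing import List


class ScaleLM:
    """Stand-in for a codec-token language model at one temporal scale."""

    def __init__(self, vocab_size: int):
        self.vocab_size = vocab_size

    def generate(self, context: List[int], length: int) -> List[int]:
        # A real model would decode autoregressively while attending to
        # `context` (text tokens plus coarser-scale speech tokens).
        # Here we emit deterministic pseudo-random tokens as placeholders.
        rng = random.Random(31 * len(context) + length)
        return [rng.randrange(self.vocab_size) for _ in range(length)]


def chain_of_scale(text: List[int], lm: ScaleLM,
                   scale_lengths: List[int]) -> List[List[int]]:
    """Single-LM mode: one LM generates all scales as one chained sequence,
    coarsest first, so each finer scale is conditioned on the coarser ones."""
    context = list(text)
    scales = []
    for length in scale_lengths:          # coarse -> fine
        tokens = lm.generate(context, length)
        context += tokens                 # finer scales see coarser output
        scales.append(tokens)
    return scales


def stack_of_scale(text: List[int], lms: List[ScaleLM],
                   scale_lengths: List[int]) -> List[List[int]]:
    """Multi-LM mode: a dedicated LM per scale, each conditioned on the text
    and the token sequence produced at the previous (coarser) scale."""
    scales: List[List[int]] = []
    coarser: List[int] = []
    for lm, length in zip(lms, scale_lengths):  # coarse -> fine
        tokens = lm.generate(list(text) + coarser, length)
        scales.append(tokens)
        coarser = tokens
    return scales


if __name__ == "__main__":
    text = [7, 8, 9]          # toy "text token" prompt
    lengths = [25, 50, 100]   # e.g. three scales, each doubling in resolution
    print([len(s) for s in chain_of_scale(text, ScaleLM(1024), lengths)])
    print([len(s) for s in stack_of_scale(
        text, [ScaleLM(1024) for _ in lengths], lengths)])
```

In this reading, chain-of-scale generation has a single LM emit every scale as one chained sequence, so finer tokens can attend to all coarser ones, while stack-of-scale generation gives each scale its own LM conditioned on the previous scale's output, the variant the abstract reports as strongest.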