Zhihao Du, Qian Chen, Shiliang Zhang, Kai Hu, Heng Lu, Yexin Yang, Hangrui Hu, Siqi Zheng, Yue Gu, Ziyang Ma, Zhijie Yan
arXiv:2407.05407 · arXiv - CS - Sound · 2024-07-07
CosyVoice: A Scalable Multilingual Zero-shot Text-to-speech Synthesizer based on Supervised Semantic Tokens
In recent years, text-to-speech (TTS) systems based on large language models (LLMs) have moved into the mainstream owing to their high naturalness and zero-shot capability. In this paradigm, speech signals are discretized into token sequences, which are modeled by an LLM with text as the prompt and reconstructed into waveforms by a token-based vocoder. Speech tokens therefore play a critical role in LLM-based TTS models. Current speech tokens are learned in an unsupervised manner, so they lack explicit semantic information and alignment with the text. In this paper, we propose to represent speech with supervised semantic tokens, derived from a multilingual speech recognition model by inserting vector quantization into its encoder. Building on these tokens, we propose a scalable zero-shot TTS synthesizer, CosyVoice, which consists of an LLM for text-to-token generation and a conditional flow-matching model for token-to-speech synthesis. Experimental results show that supervised semantic tokens significantly outperform existing unsupervised tokens in content consistency and speaker similarity for zero-shot voice cloning. Moreover, we find that training on large-scale data further improves synthesis performance, demonstrating the scalability of CosyVoice. To the best of our knowledge, this is the first attempt to incorporate supervised speech tokens into TTS models.
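The abstract describes deriving speech tokens by inserting vector quantization into an ASR encoder. A minimal sketch of the quantization step itself, in its generic nearest-neighbor form (the function name, shapes, and toy values below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def vector_quantize(hidden, codebook):
    """Map each encoder frame to its nearest codebook entry.

    hidden:   (T, D) array of encoder hidden states
    codebook: (K, D) array of learned code vectors
    Returns discrete token ids (T,) and the quantized vectors (T, D).
    """
    # Squared Euclidean distance from every frame to every code vector,
    # computed via broadcasting: (T, 1, D) - (1, K, D) -> (T, K)
    dists = ((hidden[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    ids = dists.argmin(axis=1)  # each frame becomes one discrete speech token
    return ids, codebook[ids]

# Toy example: 4 frames of 2-D features, a codebook of 3 codes
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, -1.0]])
hidden = np.array([[0.1, -0.1], [0.9, 1.2], [-0.8, -1.1], [0.0, 0.2]])
ids, quantized = vector_quantize(hidden, codebook)
# ids -> [0, 1, 2, 0]: the continuous encoder output is now a token sequence
```

The resulting integer sequence is what an LLM can model autoregressively with text as the prompt; because the encoder is trained for speech recognition, the tokens carry supervised semantic content rather than purely acoustic structure.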
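The token-to-speech stage uses a conditional flow-matching model. The standard optimal-transport form of the conditional flow-matching objective (a generic sketch; the paper's exact formulation and conditioning may differ) trains a vector field $v_\theta$ to point from a noise sample toward the target features along a straight path:

```latex
x_t = (1 - t)\,x_0 + t\,x_1,
\qquad
\mathcal{L}_{\mathrm{CFM}}
  = \mathbb{E}_{t \sim \mathcal{U}[0,1],\; x_0 \sim \mathcal{N}(0, I),\; x_1}
    \left\| v_\theta(x_t, t \mid c) - (x_1 - x_0) \right\|^2
```

Here $x_1$ is the target acoustic feature (e.g. a mel spectrogram frame sequence), $x_0$ is Gaussian noise, and $c$ is the conditioning information, which in this setting would include the generated speech tokens. At inference, integrating $v_\theta$ from $t = 0$ to $t = 1$ maps noise to acoustic features, which a vocoder then renders as a waveform.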