{"title":"Takin: A Cohort of Superior Quality Zero-shot Speech Generation Models","authors":"EverestAI, :, Sijin Chen, Yuan Feng, Laipeng He, Tianwei He, Wendi He, Yanni Hu, Bin Lin, Yiting Lin, Pengfei Tan, Chengwei Tian, Chen Wang, Zhicheng Wang, Ruoye Xie, Jingjing Yin, Jianhao Ye, Jixun Yao, Quanlei Yan, Yuguang Yang","doi":"arxiv-2409.12139","DOIUrl":null,"url":null,"abstract":"With the advent of the big data and large language model era, zero-shot\npersonalized rapid customization has emerged as a significant trend. In this\nreport, we introduce Takin AudioLLM, a series of techniques and models, mainly\nincluding Takin TTS, Takin VC, and Takin Morphing, specifically designed for\naudiobook production. These models are capable of zero-shot speech production,\ngenerating high-quality speech that is nearly indistinguishable from real human\nspeech and facilitating individuals to customize the speech content according\nto their own needs. Specifically, we first introduce Takin TTS, a neural codec\nlanguage model that builds upon an enhanced neural speech codec and a\nmulti-task training framework, capable of generating high-fidelity natural\nspeech in a zero-shot way. For Takin VC, we advocate an effective content and\ntimbre joint modeling approach to improve the speaker similarity, while\nadvocating for a conditional flow matching based decoder to further enhance its\nnaturalness and expressiveness. Last, we propose the Takin Morphing system with\nhighly decoupled and advanced timbre and prosody modeling approaches, which\nenables individuals to customize speech production with their preferred timbre\nand prosody in a precise and controllable manner. Extensive experiments\nvalidate the effectiveness and robustness of our Takin AudioLLM series models.\nFor detailed demos, please refer to https://takinaudiollm.github.io.","PeriodicalId":501284,"journal":{"name":"arXiv - EE - Audio and Speech Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Audio and Speech Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.12139","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
With the advent of the era of big data and large language models, zero-shot personalized rapid customization has emerged as a significant trend. In this report, we introduce Takin AudioLLM, a series of techniques and models, mainly comprising Takin TTS, Takin VC, and Takin Morphing, specifically designed for audiobook production. These models support zero-shot speech generation, producing high-quality speech that is nearly indistinguishable from real human speech and enabling individuals to customize speech content to their own needs. Specifically, we first introduce Takin TTS, a neural codec language model built upon an enhanced neural speech codec and a multi-task training framework, capable of generating high-fidelity natural speech in a zero-shot manner. For Takin VC, we propose an effective joint content and timbre modeling approach to improve speaker similarity, together with a conditional flow matching based decoder that further enhances naturalness and expressiveness. Finally, we present the Takin Morphing system, whose highly decoupled, advanced timbre and prosody modeling enables individuals to customize speech generation with their preferred timbre and prosody in a precise and controllable manner. Extensive experiments validate the effectiveness and robustness of our Takin AudioLLM series of models. For detailed demos, please refer to https://takinaudiollm.github.io.
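
The abstract does not detail the Takin TTS architecture, but the general recipe of a neural codec language model for zero-shot TTS can be sketched as follows: speech is tokenized by a neural codec, and a decoder-only Transformer, conditioned on the target text plus a short codec-token prompt from the reference speaker, autoregressively predicts the remaining codec tokens. All module names, shapes, and hyperparameters below (CodecLM, synthesize, and so on) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of zero-shot TTS with a neural codec language model.
# Everything here is an illustrative assumption, not the Takin TTS code.
import torch
import torch.nn as nn

class CodecLM(nn.Module):
    """Decoder-only LM over text tokens followed by speech-codec tokens."""

    def __init__(self, text_vocab=256, codec_vocab=1024, d_model=512):
        super().__init__()
        self.text_emb = nn.Embedding(text_vocab, d_model)
        self.codec_emb = nn.Embedding(codec_vocab, d_model)
        self.pos_emb = nn.Embedding(2048, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(d_model, codec_vocab)

    def forward(self, text_ids, codec_ids):
        # One sequence of text then speech-token embeddings; a causal
        # mask keeps generation autoregressive.
        x = torch.cat([self.text_emb(text_ids), self.codec_emb(codec_ids)], dim=1)
        x = x + self.pos_emb(torch.arange(x.size(1), device=x.device))
        mask = nn.Transformer.generate_square_subsequent_mask(
            x.size(1), device=x.device)
        h = self.backbone(x, mask=mask, is_causal=True)
        return self.head(h)  # next-codec-token logits at every position

@torch.no_grad()
def synthesize(model, text_ids, prompt_codec_ids, max_new=200):
    """Zero-shot cloning: condition on the target text plus a short
    codec-token prompt from the reference speaker, then sample on."""
    codec_ids = prompt_codec_ids
    for _ in range(max_new):
        logits = model(text_ids, codec_ids)[:, -1]
        next_tok = torch.multinomial(logits.softmax(-1), 1)
        codec_ids = torch.cat([codec_ids, next_tok], dim=1)
    return codec_ids  # decode to waveform with the codec's decoder
```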
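
The abstract names a conditional flow matching based decoder for Takin VC without giving its formulation. Below is a standard optimal-transport conditional flow matching training objective as used in recent speech decoders, where `cond` stands in for whatever content and timbre features condition the decoder; the network `v_theta` and all shapes are assumptions.

```python
# Sketch of a standard conditional flow matching (CFM) training step
# for a mel-spectrogram decoder. `cond` is a placeholder for content
# and timbre conditioning features; shapes are illustrative.
import torch
import torch.nn as nn

def cfm_loss(v_theta: nn.Module, x1: torch.Tensor, cond: torch.Tensor,
             sigma_min: float = 1e-4) -> torch.Tensor:
    """x1: target mel frames (B, T, D); cond: conditioning (B, T, C)."""
    b = x1.size(0)
    t = torch.rand(b, 1, 1, device=x1.device)   # random time in [0, 1]
    x0 = torch.randn_like(x1)                   # noise sample
    # Optimal-transport probability path from noise toward data.
    xt = (1 - (1 - sigma_min) * t) * x0 + t * x1
    # The regression target is the (constant) velocity along that path.
    target = x1 - (1 - sigma_min) * x0
    pred = v_theta(xt, t.view(b), cond)
    return ((pred - target) ** 2).mean()
```

At inference time such a decoder integrates the learned velocity field from noise to a mel spectrogram in a handful of ODE solver steps, which is the usual reason flow matching decoders are credited with a good speed/naturalness trade-off.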
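
For Takin Morphing, the abstract describes decoupled timbre and prosody modeling that lets a user combine a preferred timbre with a preferred prosody. A toy illustration of that recombination idea, with hypothetical encoder/decoder stand-ins rather than the paper's actual modules:

```python
# Toy sketch of timbre/prosody recombination: take a global timbre
# vector from one reference and frame-level prosody from another,
# then decode. Modules here are hypothetical stand-ins.
import torch
import torch.nn as nn

class Morpher(nn.Module):
    def __init__(self, n_mels=80, d=256):
        super().__init__()
        self.timbre_enc = nn.GRU(n_mels, d, batch_first=True)   # utterance-level timbre
        self.prosody_enc = nn.GRU(n_mels, d, batch_first=True)  # frame-level prosody
        self.decoder = nn.GRU(2 * d, n_mels, batch_first=True)

    def forward(self, timbre_ref, prosody_ref):
        _, timbre = self.timbre_enc(timbre_ref)      # (1, B, d): global timbre vector
        prosody, _ = self.prosody_enc(prosody_ref)   # (B, T, d): per-frame prosody
        timbre = timbre[-1].unsqueeze(1).expand(-1, prosody.size(1), -1)
        mel, _ = self.decoder(torch.cat([prosody, timbre], dim=-1))
        return mel  # mel frames carrying one reference's timbre, the other's prosody
```

The point of the decoupling is that the two references need not come from the same speaker, which is what makes the precise, controllable customization claimed in the abstract possible.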