Takin: A Cohort of Superior Quality Zero-shot Speech Generation Models

EverestAI: Sijin Chen, Yuan Feng, Laipeng He, Tianwei He, Wendi He, Yanni Hu, Bin Lin, Yiting Lin, Pengfei Tan, Chengwei Tian, Chen Wang, Zhicheng Wang, Ruoye Xie, Jingjing Yin, Jianhao Ye, Jixun Yao, Quanlei Yan, Yuguang Yang
arXiv: 2409.12139 (https://doi.org/arxiv-2409.12139)
Journal: arXiv - EE - Audio and Speech Processing
Published: 2024-09-18
Citations: 0

Abstract

With the advent of the big data and large language model era, zero-shot personalized rapid customization has emerged as a significant trend. In this report, we introduce Takin AudioLLM, a series of techniques and models, mainly Takin TTS, Takin VC, and Takin Morphing, designed specifically for audiobook production. These models are capable of zero-shot speech generation, producing high-quality speech that is nearly indistinguishable from real human speech and enabling users to customize speech content to their own needs. Specifically, we first introduce Takin TTS, a neural codec language model built upon an enhanced neural speech codec and a multi-task training framework, capable of generating high-fidelity natural speech in a zero-shot manner. For Takin VC, we adopt an effective joint content and timbre modeling approach to improve speaker similarity, together with a decoder based on conditional flow matching to further enhance naturalness and expressiveness. Lastly, we propose the Takin Morphing system, with highly decoupled and advanced timbre and prosody modeling approaches, which enables users to customize speech generation with their preferred timbre and prosody in a precise and controllable manner. Extensive experiments validate the effectiveness and robustness of our Takin AudioLLM series models. For detailed demos, please refer to https://takinaudiollm.github.io.
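The abstract attributes Takin VC's decoder to conditional flow matching. As a rough illustration of that family of objectives (a minimal sketch, not the authors' implementation), the snippet below shows one optimal-transport flow-matching training step: interpolate between noise and the target features along a straight line and regress a conditional network onto the constant velocity of that path. All names, shapes, and the toy predictor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_training_step(x1, cond, predict_velocity):
    """One optimal-transport CFM step: interpolate noise -> data and
    regress the network onto the constant target velocity (x1 - x0).

    x1:   target features, e.g. mel frames, shape (batch, dim)
    cond: conditioning features, e.g. content + timbre embeddings
    predict_velocity: callable (xt, t, cond) -> predicted velocity
    """
    x0 = rng.standard_normal(x1.shape)         # noise sample
    t = rng.uniform(size=(x1.shape[0], 1))     # per-example time in [0, 1]
    xt = (1.0 - t) * x0 + t * x1               # straight-line interpolant
    target_v = x1 - x0                         # velocity along the path
    pred_v = predict_velocity(xt, t, cond)
    loss = np.mean((pred_v - target_v) ** 2)   # MSE flow-matching loss
    return loss

# Toy example with a dummy predictor that outputs zeros; the loss then
# reduces to the mean squared magnitude of the target velocity field.
batch, dim = 4, 8
x1 = rng.standard_normal((batch, dim))         # hypothetical target frames
cond = rng.standard_normal((batch, dim))       # hypothetical conditioning
loss = cfm_training_step(x1, cond, lambda xt, t, c: np.zeros_like(xt))
```

At inference, such a decoder would integrate the learned velocity field from noise toward the data distribution (e.g. with a few Euler steps), which is what makes flow-matching decoders attractive for fast, natural-sounding synthesis.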