{"title":"基于个性化端到端语音合成的多尺度控制丰富风格迁移","authors":"Zhongcai Lyu, Jie Zhu","doi":"10.1109/ICIST55546.2022.9926908","DOIUrl":null,"url":null,"abstract":"Personalized speech synthesis aims to transfer speech style with a few speech samples from the target speaker. However, pretrain and fine-tuning techniques are required to overcome the problem of poor performance for similarity and prosody in a data-limited condition. In this paper, a zero-shot style transfer framework based on multi-scale control is presented to handle the above problems. Firstly, speaker embedding is extracted from a single reference speech audio by a specially designed reference encoder, with which Speaker-Adaptive Linear Modulation (SALM) could generate the scale and bias vector to influence the encoder output, and consequently greatly enhance the adaptability to unseen speakers. Secondly, we propose a prosody module that includes a prosody extractor and prosody predictor, which can efficiently predict the prosody of the generated speech from the reference audio and text information and achieve phoneme-level prosody control, thus increasing the diversity of the synthesized speech. Using both objective and subjective metrics for evaluation, the experiments demonstrate that our model is capable of synthesizing speech of high naturalness and similarity of speech, with only a few or even a single piece of data from the target speaker.","PeriodicalId":211213,"journal":{"name":"2022 12th International Conference on Information Science and Technology (ICIST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enriching Style Transfer in multi-scale control based personalized end-to-end speech synthesis\",\"authors\":\"Zhongcai Lyu, Jie Zhu\",\"doi\":\"10.1109/ICIST55546.2022.9926908\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Personalized speech synthesis aims to transfer speech style with a few speech samples from the target speaker. However, pretrain and fine-tuning techniques are required to overcome the problem of poor performance for similarity and prosody in a data-limited condition. In this paper, a zero-shot style transfer framework based on multi-scale control is presented to handle the above problems. Firstly, speaker embedding is extracted from a single reference speech audio by a specially designed reference encoder, with which Speaker-Adaptive Linear Modulation (SALM) could generate the scale and bias vector to influence the encoder output, and consequently greatly enhance the adaptability to unseen speakers. Secondly, we propose a prosody module that includes a prosody extractor and prosody predictor, which can efficiently predict the prosody of the generated speech from the reference audio and text information and achieve phoneme-level prosody control, thus increasing the diversity of the synthesized speech. 
Using both objective and subjective metrics for evaluation, the experiments demonstrate that our model is capable of synthesizing speech of high naturalness and similarity of speech, with only a few or even a single piece of data from the target speaker.\",\"PeriodicalId\":211213,\"journal\":{\"name\":\"2022 12th International Conference on Information Science and Technology (ICIST)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 12th International Conference on Information Science and Technology (ICIST)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICIST55546.2022.9926908\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 12th International Conference on Information Science and Technology (ICIST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIST55546.2022.9926908","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Personalized speech synthesis aims to transfer speech style using only a few speech samples from the target speaker. However, pretraining and fine-tuning are typically required to overcome poor similarity and prosody under data-limited conditions. In this paper, a zero-shot style transfer framework based on multi-scale control is presented to address these problems. First, a speaker embedding is extracted from a single reference utterance by a specially designed reference encoder; from this embedding, Speaker-Adaptive Linear Modulation (SALM) generates scale and bias vectors that modulate the encoder output, greatly enhancing adaptability to unseen speakers. Second, we propose a prosody module comprising a prosody extractor and a prosody predictor, which efficiently predicts the prosody of the generated speech from the reference audio and the text, enabling phoneme-level prosody control and increasing the diversity of the synthesized speech. Evaluated with both objective and subjective metrics, the experiments demonstrate that our model synthesizes speech with high naturalness and speaker similarity from only a few samples, or even a single sample, from the target speaker.
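
To make the scale-and-bias conditioning concrete, below is a minimal PyTorch sketch of feature-wise linear modulation of an encoder output by a speaker embedding, in the spirit of the SALM mechanism the abstract describes. This is an illustrative reconstruction, not the authors' implementation: the class name SALM comes from the abstract, but all layer names and dimensions (speaker_dim, encoder_dim, to_scale, to_bias) are assumptions.

```python
# Illustrative sketch (assumed design, not the paper's code): a speaker
# embedding predicts per-channel scale and bias vectors, which modulate
# the text-encoder output so synthesis adapts to an unseen speaker.
import torch
import torch.nn as nn


class SALM(nn.Module):
    """FiLM-style speaker-adaptive linear modulation (hypothetical layout)."""

    def __init__(self, speaker_dim: int, encoder_dim: int):
        super().__init__()
        self.to_scale = nn.Linear(speaker_dim, encoder_dim)
        self.to_bias = nn.Linear(speaker_dim, encoder_dim)

    def forward(self, encoder_out: torch.Tensor, speaker_emb: torch.Tensor) -> torch.Tensor:
        # encoder_out: (batch, phoneme_steps, encoder_dim)
        # speaker_emb: (batch, speaker_dim), e.g. from a reference encoder
        scale = self.to_scale(speaker_emb).unsqueeze(1)  # (batch, 1, encoder_dim)
        bias = self.to_bias(speaker_emb).unsqueeze(1)    # (batch, 1, encoder_dim)
        return scale * encoder_out + bias                # broadcast over time steps


# Usage with dummy tensors (all sizes illustrative):
salm = SALM(speaker_dim=256, encoder_dim=384)
h = torch.randn(2, 50, 384)   # encoder output for 50 phonemes
spk = torch.randn(2, 256)     # speaker embedding from one reference utterance
out = salm(h, spk)            # same shape as h, now speaker-conditioned
```

Because the scale and bias are functions of the speaker embedding alone, a single reference utterance suffices to condition the whole encoder output, which is consistent with the zero-shot setting the abstract claims.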