PE-Wav2vec: A Prosody-Enhanced Speech Model for Self-Supervised Prosody Learning in TTS
Zhao-Ci Liu; Liping Chen; Ya-Jun Hu; Zhen-Hua Ling; Jia Pan
IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 32, pp. 4199-4210
Published: 2024-08-23
DOI: 10.1109/TASLP.2024.3449148
Citations: 0
Abstract
This paper investigates leveraging large-scale untranscribed speech data to enhance the prosody modelling capability of text-to-speech (TTS) models. On the basis of the self-supervised speech model wav2vec 2.0, Prosody-Enhanced wav2vec (PE-wav2vec) is proposed by introducing prosody learning. Specifically, prosody learning is achieved by applying supervision from the linear predictive coding (LPC) residual signals on the initial Transformer blocks in the wav2vec 2.0 architecture. The embedding vectors extracted with the initial Transformer blocks of the PE-wav2vec model are utilised as prosodic representations for the corresponding frames in a speech utterance. To apply the PE-wav2vec representations in TTS, an acoustic model named Speech Synthesis model conditioned on Self-Supervisedly Learned Prosodic Representations (S4LPR) is designed on the basis of FastSpeech 2. The experimental results demonstrate that the proposed PE-wav2vec model can provide richer prosody descriptions of speech than the vanilla wav2vec 2.0 model can. Furthermore, the S4LPR model using PE-wav2vec representations can effectively improve the subjective naturalness and reduce the objective distortions of synthetic speech compared with baseline models.
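To make the supervision idea concrete, below is a minimal, hypothetical sketch of how frame-level LPC residual targets could be derived from a waveform and predicted from the hidden states of an initial Transformer block. It is not the authors' implementation: the 25 ms / 20 ms framing (chosen to roughly match the 20 ms frame rate of wav2vec 2.0), the log-residual-energy target, the `ProsodyHead` module, and the MSE objective are all illustrative assumptions.

```python
# Hypothetical illustration of LPC-residual supervision on intermediate features.
import numpy as np
import librosa
import torch
import torch.nn as nn
from scipy.signal import lfilter


def lpc_residual_targets(wav, lpc_order=16, frame_len=400, hop=320):
    """Return one log-energy value of the LPC residual per analysis frame."""
    targets = []
    for start in range(0, len(wav) - frame_len + 1, hop):
        frame = wav[start:start + frame_len].astype(np.float64)
        a = librosa.lpc(frame, order=lpc_order)   # LPC coefficients [1, a_1, ..., a_p]
        residual = lfilter(a, [1.0], frame)       # inverse filtering: e[n] = A(z) x[n]
        targets.append(np.log(np.mean(residual ** 2) + 1e-8))
    return torch.tensor(targets, dtype=torch.float32)


class ProsodyHead(nn.Module):
    """Predicts the residual target from hidden states of an initial Transformer block.

    Assumes the hidden-state frames are time-aligned with the residual targets.
    """

    def __init__(self, hidden_dim=768):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states, targets):
        # hidden_states: (frames, hidden_dim); targets: (frames,)
        pred = self.proj(hidden_states).squeeze(-1)
        return nn.functional.mse_loss(pred, targets)  # auxiliary prosody loss term
```

In a training setup of this kind, such an auxiliary loss would presumably be added to the standard wav2vec 2.0 objective only for the initial Transformer blocks, so that their outputs are pushed towards prosody-related information while the later blocks remain free to model other aspects of the signal.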
About the Journal
The IEEE/ACM Transactions on Audio, Speech, and Language Processing covers audio, speech and language processing and the sciences that support them. In audio processing: transducers, room acoustics, active sound control, human audition, analysis/synthesis/coding of music, and consumer audio. In speech processing: areas such as speech analysis, synthesis, coding, speech and speaker recognition, speech production and perception, and speech enhancement. In language processing: speech and text analysis, understanding, generation, dialog management, translation, summarization, question answering and document indexing and retrieval, as well as general language modeling.