Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020: Latest Publications

The Duke Entry for 2020 Blizzard Challenge
Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020 | Pub Date: 2020-10-30 | DOI: 10.21437/vcc_bc.2020-5
Zexin Cai, Ming Li
Abstract: This paper presents the speech synthesis system built for the 2020 Blizzard Challenge by team 'H'. The goal of the challenge is to build a synthesizer able to generate high-fidelity speech in a voice similar to that of the provided data. Our system mainly draws on end-to-end neural networks. Specifically, we use an encoder-decoder based prosody prediction network to insert prosodic annotations for a given sentence, the spectrogram predictor from Tacotron2 as the end-to-end phoneme-to-spectrogram generator, and the neural vocoder WaveRNN to convert predicted spectrograms into audio signals. Additionally, we apply fine-tuning strategies to improve TTS performance. Subjective evaluation of the synthesized audio covers naturalness, similarity, and intelligibility. Samples are available online for listening.
Citations: 0
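The entry above follows the now-common two-stage pipeline of a Tacotron2 spectrogram predictor feeding a WaveRNN vocoder. As a rough illustration of the second stage only, the sketch below shows a toy WaveRNN-style autoregressive loop: a single GRU conditioned on a mel frame and the previous sample, emitting one 8-bit quantized sample per step. All sizes are assumptions, the dual softmax and conditioning-upsampling network of the real WaveRNN are omitted, and this is not the authors' implementation.

```python
# Toy WaveRNN-style vocoder loop (illustrative only; not the Duke system's code).
# Assumes 80-dim mel frames and a single 8-bit softmax; a real WaveRNN upsamples
# each conditioning frame over many samples and uses a dual softmax.
import torch
import torch.nn as nn

class TinyWaveRNN(nn.Module):
    def __init__(self, mel_dim=80, hidden=256, classes=256):
        super().__init__()
        self.gru = nn.GRU(mel_dim + 1, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, classes)

    @torch.no_grad()
    def generate(self, mel):                      # mel: (T, mel_dim); one sample per frame here
        h, sample, out = None, torch.zeros(1, 1, 1), []
        for t in range(mel.size(0)):
            x = torch.cat([mel[t].view(1, 1, -1), sample], dim=-1)
            y, h = self.gru(x, h)                 # condition on mel frame + previous sample
            probs = torch.softmax(self.fc(y), dim=-1)
            idx = torch.multinomial(probs.view(-1), 1)        # draw a quantized level
            sample = (idx.float() / 127.5 - 1.0).view(1, 1, 1)
            out.append(sample.item())
        return out

audio = TinyWaveRNN().generate(torch.randn(100, 80))          # dummy conditioning frames
```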
Submission from SRCB for Voice Conversion Challenge 2020
Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020 | Pub Date: 2020-10-30 | DOI: 10.21437/vcc_bc.2020-18
Qiuyue Ma, Ruolan Liu, Xue Wen, Chunhui Lu, Xiao Chen
Abstract: This paper presents an intra-lingual and cross-lingual voice conversion system for the Voice Conversion Challenge 2020 (VCC 2020). Voice conversion (VC) modifies a source speaker's speech so that the result sounds like a target speaker, which becomes particularly difficult when the source and target speakers speak different languages. In this work we focus on building a voice conversion system that achieves consistent improvements in accent and intelligibility evaluations. Our voice conversion system consists of a bilingual phoneme-recognition-based speech representation module, a neural-network-based speech generation module, and a neural vocoder. More concretely, we extract general phonation from the source speakers' speech in different languages, and improve the sound quality by optimizing the speech synthesis module and adding a noise-suppression post-processing module to the vocoder. This framework yields highly intelligible and natural speech that is very close to human quality (MOS = 4.17, rank 2 in Task 1; MOS = 4.13, rank 2 in Task 2).
Citations: 7
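The recognition-synthesis structure described above (bilingual phoneme recognition feeding a speech generator and a vocoder) can be caricatured in a few lines. The sketch below is a hypothetical, heavily simplified illustration of the idea, not the SRCB system: the recognizer stands in for a trained bilingual ASR acoustic model, and the phone-set size, speaker-embedding size, and GRU generator are all assumed.

```python
# Illustrative recognition-synthesis voice conversion sketch (not the SRCB code):
# a phoneme recognizer turns source speech into speaker-independent posteriors,
# and a generator maps them plus a target-speaker embedding to mel-spectrograms.
import torch
import torch.nn as nn

N_PHONES = 180          # assumed size of a merged Mandarin+English phone set
MEL_DIM = 80

recognizer = nn.Sequential(                 # stand-in for a trained bilingual ASR acoustic model
    nn.Linear(MEL_DIM, 256), nn.ReLU(), nn.Linear(256, N_PHONES))
generator = nn.GRU(N_PHONES + 64, 256, batch_first=True)      # posteriors + speaker embedding
to_mel = nn.Linear(256, MEL_DIM)

def convert(source_mel, target_spk_emb):
    """source_mel: (T, MEL_DIM); target_spk_emb: (64,)"""
    ppg = torch.softmax(recognizer(source_mel), dim=-1)       # speaker-independent posteriors
    spk = target_spk_emb.expand(ppg.size(0), -1)              # broadcast speaker identity per frame
    h, _ = generator(torch.cat([ppg, spk], dim=-1).unsqueeze(0))
    return to_mel(h.squeeze(0))                                # converted mel, vocoded separately

mel_out = convert(torch.randn(120, MEL_DIM), torch.randn(64))
```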
The Ximalaya TTS System for Blizzard Challenge 2020
Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020 | Pub Date: 2020-10-30 | DOI: 10.21437/vcc_bc.2020-10
Wendi He, Zhiba Su, Yang Sun
Abstract: This paper describes the Ximalaya text-to-speech synthesis system built for the Blizzard Challenge 2020. The two tasks are to build expressive speech synthesizers based on the released 9.5-hour Mandarin corpus from a male native speaker and the 3-hour Shanghainese corpus from a female native speaker, respectively. Our architecture is a Tacotron2-based acoustic model with a WaveRNN vocoder. Several methods for preprocessing and checking the raw BC transcripts are implemented. First, the multi-task TTS front-end module transforms the text sequences into phoneme-level sequences with prosody labels, applying polyphone disambiguation and prosody prediction. Then, we train a Seq2seq multi-speaker acoustic model on the released corpora to model Mel spectrograms. In addition, the neural vocoder WaveRNN [1], with minor improvements, generates high-quality audio for the submitted results. The identifier for our system is M, and the listening-test evaluation results show that our submitted system performed well on most of the criteria.
Citations: 0
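The abstract mentions a multi-task front-end that performs polyphone disambiguation and prosody prediction before acoustic modelling. The sketch below shows one generic way such a front-end can share an encoder between the two tasks; the BiLSTM encoder, vocabulary sizes, and four-level prosody label set are assumptions for illustration, not details taken from the paper.

```python
# Generic multi-task Mandarin TTS front-end sketch: one shared text encoder with
# a polyphone (pinyin) head and a prosodic-boundary head. Sizes are assumed.
import torch
import torch.nn as nn

class FrontEnd(nn.Module):
    def __init__(self, n_chars=6000, n_pinyin=1500, n_prosody=4, dim=256):
        super().__init__()
        self.embed = nn.Embedding(n_chars, dim)
        self.encoder = nn.LSTM(dim, dim // 2, batch_first=True, bidirectional=True)
        self.pinyin_head = nn.Linear(dim, n_pinyin)      # polyphone disambiguation
        self.prosody_head = nn.Linear(dim, n_prosody)    # e.g. #1/#2/#3/#4 boundary labels

    def forward(self, char_ids):                          # char_ids: (B, T)
        h, _ = self.encoder(self.embed(char_ids))
        return self.pinyin_head(h), self.prosody_head(h)

model = FrontEnd()
pinyin_logits, prosody_logits = model(torch.randint(0, 6000, (2, 20)))
# Training would sum a per-head cross-entropy loss over the shared encoder.
```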
Non-parallel Voice Conversion based on Hierarchical Latent Embedding Vector Quantized Variational Autoencoder
Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020 | Pub Date: 2020-10-30 | DOI: 10.21437/vcc_bc.2020-20
Tuan Vu Ho, M. Akagi
Abstract: This paper proposes a hierarchical latent embedding structure for the Vector Quantized Variational Autoencoder (VQVAE) to improve the performance of non-parallel voice conversion (NPVC) models. Previous studies on NPVC based on the vanilla VQVAE use a single codebook to encode linguistic information at a fixed temporal scale. However, linguistic structure contains different semantic levels (e.g., phoneme, syllable, word) that span various temporal scales, so the converted speech may contain unnatural pronunciations that degrade its naturalness. To tackle this problem, we propose a hierarchical latent embedding structure comprising several vector quantization blocks operating at different temporal scales. When trained on a multi-speaker database, the proposed model can encode voice characteristics into a speaker embedding vector, which can be used in one-shot learning settings. Results from objective and subjective tests indicate that the proposed model outperforms the conventional VQVAE-based model in both intra-lingual and cross-lingual conversion tasks. The official results of the Voice Conversion Challenge 2020 show that the proposed model achieved the highest naturalness among autoencoder-based models in both tasks. Our implementation is available online.
Citations: 12
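The core mechanism here is vector quantization applied at several temporal scales. The sketch below illustrates that idea with a plain VQ layer (nearest-codebook lookup with a straight-through gradient) applied at a fine scale and a 4x-downsampled coarse scale; the codebook sizes, dimensions, and average-pooling downsampler are illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal hierarchical VQ illustration (not the paper's implementation):
# two codebooks quantize encoder outputs at different temporal resolutions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, n_codes=256, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)

    def forward(self, z):                                     # z: (B, T, dim)
        d = torch.cdist(z, self.codebook.weight.unsqueeze(0)) # distance to every code vector
        idx = d.argmin(dim=-1)                                # nearest-code indices
        zq = self.codebook(idx)
        return z + (zq - z).detach(), idx                     # straight-through estimator

z = torch.randn(1, 100, 64)                                   # dummy frame-level encoder output
vq_fine, vq_coarse = VectorQuantizer(), VectorQuantizer()
zq_fine, _ = vq_fine(z)                                       # fine scale: every frame
z_coarse = F.avg_pool1d(z.transpose(1, 2), 4).transpose(1, 2) # coarse scale: 4x downsampled
zq_coarse, _ = vq_coarse(z_coarse)
# A decoder would consume zq_fine and (upsampled) zq_coarse plus a speaker
# embedding to reconstruct the spectrogram.
```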
The Sogou System for Blizzard Challenge 2020
Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020 | Pub Date: 2020-10-30 | DOI: 10.21437/vcc_bc.2020-8
Fanbo Meng, Ruimin Wang, Peng Fang, Shuangyuan Zou, Wenjun Duan, Ming Zhou, Kai Liu, Wei Chen
Abstract: In this paper, we introduce the text-to-speech system from the Sogou team submitted to the Blizzard Challenge 2020. The goal of this year's challenge is to build a natural Mandarin Chinese speech synthesis system from a 10-hour corpus recorded by a native Chinese male speaker. We discuss the major modules of the submitted system: (1) the front-end module that analyzes the pronunciation and prosody of the text; (2) the FastSpeech-based sequence-to-sequence acoustic model that predicts acoustic features; and (3) the WaveRNN-based neural vocoder that reconstructs waveforms. Evaluation results provided by the challenge organizers are also discussed.
Citations: 1
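FastSpeech, named above as the acoustic model, is non-autoregressive: it expands phoneme-level encoder states to frame level with a length regulator driven by predicted durations. The snippet below shows just that operation in isolation; the dimensions and durations are made up, and this is a generic illustration rather than Sogou's code.

```python
# Generic FastSpeech-style length regulator: repeat each phoneme-level encoder
# state by its predicted duration to obtain a frame-level decoder input.
import torch

def length_regulate(encodings, durations):
    """encodings: (N, D) phoneme-level states; durations: (N,) frames per phoneme."""
    return torch.repeat_interleave(encodings, durations, dim=0)   # (sum(durations), D)

enc = torch.randn(5, 256)                      # 5 phonemes, 256-dim encoder states (dummy)
dur = torch.tensor([3, 7, 4, 6, 2])            # predicted durations in frames (dummy)
frames = length_regulate(enc, dur)             # (22, 256) frame-level input to the decoder
assert frames.shape == (int(dur.sum()), 256)
```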
NUS-HLT System for Blizzard Challenge 2020
Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020 | Pub Date: 2020-10-30 | DOI: 10.21437/vcc_bc.2020-7
Yi Zhou, Xiaohai Tian, Xuehao Zhou, Mingyang Zhang, Grandee Lee, Rui Liu, Berrak Sisman, Haizhou Li
Abstract: This paper presents the NUS-HLT text-to-speech (TTS) system for the Blizzard Challenge 2020. The challenge has two tasks: the Hub task 2020-MH1, to synthesize Mandarin Chinese given 9.5 hours of speech data from a male native speaker of Mandarin, and the Spoke task 2020-SS1, to synthesize Shanghainese given 3 hours of speech data from a female native speaker of Shanghainese. Our submitted system combines word embeddings, extracted from a pre-trained language model, with an E2E TTS synthesizer to generate acoustic features from text input. WaveRNN and WaveNet neural vocoders are used to generate speech waveforms from the acoustic features in the MH1 and SS1 tasks, respectively. Evaluation results provided by the challenge organizers demonstrate the effectiveness of our submitted TTS system.
Citations: 0
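The distinguishing idea in this entry is injecting word embeddings from a pre-trained language model into the E2E synthesizer. One straightforward way to combine the two granularities, shown below purely as an assumption-laden illustration (the abstract does not specify the fusion mechanism used), is to repeat each word vector over its phonemes and concatenate it with the phoneme-level encoder output.

```python
# Hypothetical fusion of pre-trained word embeddings with a TTS phoneme encoder:
# repeat each word vector across its phonemes, then concatenate feature-wise.
import torch

phone_emb = torch.randn(12, 256)                    # 12 phonemes from the TTS text encoder (dummy)
word_emb = torch.randn(3, 768)                      # 3 words from a pre-trained language model (dummy)
phones_per_word = torch.tensor([5, 3, 4])           # phoneme count of each word (assumed alignment)

word_at_phone = torch.repeat_interleave(word_emb, phones_per_word, dim=0)   # (12, 768)
enc_in = torch.cat([phone_emb, word_at_phone], dim=-1)                      # (12, 1024)
# enc_in would then feed the usual attention-based mel decoder.
```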
The UFRJ Entry for the Voice Conversion Challenge 2020
Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020 | Pub Date: 2020-10-30 | DOI: 10.21437/vcc_bc.2020-29
Victor Costa, Igor M. Quintanilha, S. L. Netto, L. Biscainho
Abstract: This paper presents our system submitted to Task 1 of the 2020 edition of the Voice Conversion Challenge (VCC), based on CycleGAN to convert mel-spectrograms and MelGAN to synthesize the converted speech. CycleGAN is a GAN-based morphing network that uses a cyclic reconstruction cost to allow training with non-parallel corpora. MelGAN is a GAN-based non-autoregressive neural vocoder that uses a multi-scale discriminator to efficiently capture the complexities of speech signals and achieve high-quality output with extremely fast generation. In the VCC 2020 evaluation our system achieved mean opinion scores of 1.92 for English listeners and 1.81 for Japanese listeners, and average similarity scores of 2.51 for English listeners and 2.59 for Japanese listeners. The results suggest that using neural vocoders to represent converted speech may demand specific training strategies and the use of adaptation techniques.
Citations: 0
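The cyclic reconstruction cost is what lets CycleGAN learn from non-parallel corpora: converting source to target and back should recover the source, and vice versa. The sketch below writes that term with two toy linear "generators" on mel frames; the L1 form and the toy modules are generic illustrations, not the UFRJ models.

```python
# Generic CycleGAN cycle-consistency term on mel frames (toy generators only).
import torch
import torch.nn as nn

G_xy = nn.Linear(80, 80)      # stand-in for the source->target mel generator
G_yx = nn.Linear(80, 80)      # stand-in for the target->source mel generator

def cycle_loss(x, y):
    """x, y: non-parallel batches of mel frames from the source and target speakers."""
    return (torch.abs(G_yx(G_xy(x)) - x).mean() +     # x -> y' -> x'' should recover x
            torch.abs(G_xy(G_yx(y)) - y).mean())      # y -> x' -> y'' should recover y

loss = cycle_loss(torch.randn(32, 80), torch.randn(32, 80))
# In full training, this term is added to the adversarial losses of the two discriminators.
```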
Submission from SCUT for Blizzard Challenge 2020
Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020 | Pub Date: 2020-10-30 | DOI: 10.21437/vcc_bc.2020-6
J. Zhong, Yitao Yang, S. Bu
Abstract: In this paper, we describe the SCUT text-to-speech synthesis system for the Blizzard Challenge 2020, whose task is to build a voice from the provided Mandarin dataset. We begin with our system architecture, composed of an end-to-end structure that converts textual sequences into acoustic features and a WaveRNN vocoder that restores the waveform. We then introduce a BERT-based prosody prediction model that specifies the prosodic information of the content. The text processing module is adjusted to uniformly encode both Mandarin and English text, and a two-stage training method is used to build a bilingual speech synthesis system. Meanwhile, we employ forward attention and guided attention mechanisms to accelerate the model's convergence. Finally, the reasons for our system's weak performance in the evaluation results are discussed.
Citations: 0
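Guided attention, mentioned above as one of the convergence aids, penalizes encoder-decoder attention mass that strays far from the diagonal. Below is a small generic implementation of that loss; the width parameter g = 0.2 is a value commonly used in the literature, not necessarily the one used by this entry.

```python
# Generic guided attention loss: weight off-diagonal attention and penalize it.
import torch

def guided_attention_loss(attn, g=0.2):
    """attn: (T_dec, T_enc) attention weights from the seq2seq model."""
    T_dec, T_enc = attn.shape
    n = torch.arange(T_dec).unsqueeze(1) / T_dec
    t = torch.arange(T_enc).unsqueeze(0) / T_enc
    weight = 1.0 - torch.exp(-((n - t) ** 2) / (2 * g * g))   # ~0 near the diagonal
    return (attn * weight).mean()

loss = guided_attention_loss(torch.softmax(torch.randn(400, 60), dim=-1))   # dummy attention
```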
The NLPR Speech Synthesis entry for Blizzard Challenge 2020
Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020 | Pub Date: 2020-10-30 | DOI: 10.21437/vcc_bc.2020-12
Tao Wang, J. Tao, Ruibo Fu, Zhengqi Wen, Chunyu Qiang
Abstract: This paper describes the NLPR speech synthesis system entry for the Blizzard Challenge 2020. More than 9 hours of speech data from a news anchor and 3 hours of speech from a native Shanghainese speaker are adopted as training data for building this year's systems. Our speech synthesis system is built on a multi-speaker end-to-end speech synthesis framework, and an LPCNet-based neural vocoder is adopted to improve quality. Different from our previous system, improvements to the data pruning and speaker adaptation strategies were made to improve stability. In this paper, the whole system structure, the data pruning method, and duration control are introduced and discussed. In addition, this year's challenge includes the two tasks of Mandarin and Shanghainese, and we introduce the important parts of each respectively. Finally, the results of the listening tests are presented.
Citations: 3
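The abstract mentions improved speaker adaptation strategies but does not spell them out, so the sketch below only illustrates one common adaptation recipe for a multi-speaker end-to-end model: freeze the shared synthesis network and fine-tune the target speaker's embedding on the new data. Every module and size here is a stand-in assumption, not the NLPR system.

```python
# Generic speaker-adaptation sketch for a multi-speaker model: freeze the shared
# synthesizer, optimize only the speaker embedding table on the target data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSpeakerTTS(nn.Module):
    def __init__(self, n_speakers=10, spk_dim=64, mel_dim=80):
        super().__init__()
        self.spk_table = nn.Embedding(n_speakers, spk_dim)
        self.synth = nn.Linear(spk_dim + 256, mel_dim)    # stand-in for the whole synthesizer

    def forward(self, text_enc, spk_id):                   # text_enc: (T, 256)
        spk = self.spk_table(spk_id).expand(text_enc.size(0), -1)
        return self.synth(torch.cat([text_enc, spk], dim=-1))

model = MultiSpeakerTTS()
for p in model.synth.parameters():                          # freeze the shared network
    p.requires_grad = False
optim = torch.optim.Adam(model.spk_table.parameters(), lr=1e-4)   # adapt only the embedding

mel = model(torch.randn(50, 256), torch.tensor(3))          # target speaker's embedding slot
loss = F.l1_loss(mel, torch.randn(50, 80))                  # dummy reconstruction target
loss.backward()
optim.step()
```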