Fine Tuning and Comparing Tacotron 2, Deep Voice 3, and FastSpeech 2 TTS Models in a Low Resource Environment

T. Gopalakrishnan, Syed Ayaz Imam, Archit Aggarwal
{"title":"Fine Tuning and Comparing Tacotron 2, Deep Voice 3, and FastSpeech 2 TTS Models in a Low Resource Environment","authors":"T. Gopalakrishnan, Syed Ayaz Imam, Archit Aggarwal","doi":"10.1109/ICDSIS55133.2022.9915932","DOIUrl":null,"url":null,"abstract":"Text-to-speech (TTS) models are used to generate speech from a sequence of characters provided as input. Existing TTS systems require a high-quality large dataset and vast computational resources for training. However, most of the publicly available datasets do not meet such standards, and access to powerful GPUs may not always be possible. Hence, in our work, we have successfully trained and compared TTS models, specifically Tacotron 2, FastSpeech 2, and Deep Voice 3 on a Tesla T4 GPU using a subset of the LJSpeechl.1 dataset. Subsequently, we have surveyed to analyze the performance of the models when trained on small datasets, and we discovered that the Tacotron 2 TTS model synthesized the most realistic sounding speeches. The survey revealed that the Tacotron 2 TTS model achieved a mean opinion score (MOS) at a 95% confidence interval of 4.25± 0.17, and sounded the most natural to our listeners when compared to the ground truth.","PeriodicalId":178360,"journal":{"name":"2022 IEEE International Conference on Data Science and Information System (ICDSIS)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Data Science and Information System (ICDSIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDSIS55133.2022.9915932","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Text-to-speech (TTS) models generate speech from a sequence of input characters. Existing TTS systems require large, high-quality datasets and vast computational resources for training. However, most publicly available datasets do not meet such standards, and access to powerful GPUs is not always possible. Hence, in our work, we successfully trained and compared TTS models, specifically Tacotron 2, FastSpeech 2, and Deep Voice 3, on a Tesla T4 GPU using a subset of the LJSpeech-1.1 dataset. Subsequently, we conducted a survey to analyze the performance of the models when trained on small datasets, and we found that the Tacotron 2 model synthesized the most realistic-sounding speech. The survey revealed that the Tacotron 2 model achieved a mean opinion score (MOS) of 4.25 ± 0.17 at a 95% confidence interval and sounded the most natural to our listeners when compared to the ground truth.
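The MOS figure above is a sample mean over listener ratings with a 95% confidence interval. A minimal sketch of that computation, using the standard normal-approximation interval (z = 1.96) and purely hypothetical ratings rather than the paper's raw survey data:

```python
import math
import statistics

def mos_with_ci(ratings, z=1.96):
    """Mean opinion score and the half-width of its 95% confidence
    interval, using the normal approximation z * s / sqrt(n)."""
    n = len(ratings)
    mean = statistics.mean(ratings)
    half_width = z * statistics.stdev(ratings) / math.sqrt(n)
    return mean, half_width

# Hypothetical 1-5 listener ratings (illustrative only).
ratings = [5, 4, 4, 5, 4, 3, 5, 4, 4, 5]
mos, ci = mos_with_ci(ratings)
print(f"MOS = {mos:.2f} ± {ci:.2f}")
```

For small listener panels, a Student's t critical value would be more appropriate than the fixed z = 1.96 used here.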