Deep learning-based speaking rate-dependent hierarchical prosodic model for Mandarin TTS

Yen-Ting Lin, Chen-Yu Chiang
{"title":"Deep learning-based speaking rate-dependent hierarchical prosodie model for Mandarin TTS","authors":"Yen-Ting Lin, Chen-Yu Chiang","doi":"10.1109/APSIPA.2017.8282228","DOIUrl":null,"url":null,"abstract":"Speaking Rate-dependent Hierarchical Prosodie Model (SR-HPM) is a syllable-based statistical prosodie model and has been successfully served as a prosody generation model in a speaking rate-controlled text-to-speech system for Mandarin, and two Chinese dialects: Taiwan Min and Si-Xian Hakka. Excited by the success of utilizing deep learning (DL) techniques in parametric speech synthesis based on the HMM-based speech synthesis system, this study aims to improve the performance of the SR-HPM in prosody generation by replacing the conventional cascaded statistical sub-models with DL-based models, i.e. the DL-based SR-HPM. Each of the sub-model is first independently realized by a specially designed DL-based model based on its input-output characteristics. Then, all sub-models are cascaded and unified as one deep neural structure with their parameters being obtained by an end-to-end (linguistic feature-to-prosodic acoustic feature) optimization manner. The subjective and objective tests show that the DL-based SR-HPM performs better than the conventional statistical SR-HPM in prosody generation.","PeriodicalId":142091,"journal":{"name":"2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/APSIPA.2017.8282228","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The Speaking Rate-dependent Hierarchical Prosodic Model (SR-HPM) is a syllable-based statistical prosodic model that has successfully served as a prosody generation model in speaking rate-controlled text-to-speech systems for Mandarin and two Chinese dialects: Taiwan Min and Si-Xian Hakka. Motivated by the success of deep learning (DL) techniques in HMM-based parametric speech synthesis, this study aims to improve the prosody-generation performance of the SR-HPM by replacing its conventional cascaded statistical sub-models with DL-based models, yielding the DL-based SR-HPM. Each sub-model is first realized independently by a DL-based model specially designed for its input-output characteristics. All sub-models are then cascaded and unified into one deep neural structure whose parameters are obtained by end-to-end optimization, from linguistic features to prosodic acoustic features. Subjective and objective tests show that the DL-based SR-HPM outperforms the conventional statistical SR-HPM in prosody generation.
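The following is a minimal PyTorch sketch of the general idea described in the abstract: several DL-based sub-models cascaded into a single neural structure and optimized end-to-end from linguistic features to prosodic acoustic features, with speaking rate as a conditioning input. The module names, feature dimensions, and the two-stage break-tagger/regressor decomposition are illustrative assumptions, not the architecture specified in the paper.

```python
# Hedged sketch: a cascade of two hypothetical sub-models trained jointly
# (end-to-end) from linguistic features to prosodic acoustic features.
# All names and dimensions below are assumptions for illustration only.
import torch
import torch.nn as nn


class BreakTagger(nn.Module):
    """Hypothetical sub-model: linguistic features + speaking rate -> break-type posteriors."""
    def __init__(self, ling_dim, sr_dim=1, n_breaks=4, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ling_dim + sr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_breaks),
        )

    def forward(self, ling, sr):
        return torch.softmax(self.net(torch.cat([ling, sr], dim=-1)), dim=-1)


class ProsodyRegressor(nn.Module):
    """Hypothetical sub-model: linguistic features, break posteriors, and speaking
    rate -> per-syllable prosodic acoustic features (e.g. F0, duration, energy)."""
    def __init__(self, ling_dim, n_breaks=4, sr_dim=1, out_dim=6, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ling_dim + n_breaks + sr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, ling, breaks, sr):
        return self.net(torch.cat([ling, breaks, sr], dim=-1))


class CascadedProsodyModel(nn.Module):
    """The sub-models cascaded and unified so gradients flow through the whole chain."""
    def __init__(self, ling_dim=32):
        super().__init__()
        self.break_tagger = BreakTagger(ling_dim)
        self.regressor = ProsodyRegressor(ling_dim)

    def forward(self, ling, sr):
        breaks = self.break_tagger(ling, sr)
        return self.regressor(ling, breaks, sr)


# End-to-end training step on dummy data: one loss updates both sub-models.
model = CascadedProsodyModel(ling_dim=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ling = torch.randn(16, 32)   # per-syllable linguistic features (dummy)
sr = torch.rand(16, 1)       # normalized speaking rate (dummy)
target = torch.randn(16, 6)  # prosodic acoustic targets (dummy)

opt.zero_grad()
loss = nn.functional.mse_loss(model(ling, sr), target)
loss.backward()
opt.step()
```

In practice each sub-model would first be pre-trained on its own input-output pairs (as the abstract describes) and the cascade then fine-tuned jointly; the sketch shows only the joint-optimization step.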