Towards Fine-Grained Prosody Control for Voice Conversion
Zheng Lian, J. Tao, Zhengqi Wen, Bin Liu, Yibin Zheng, Rongxiu Zhong
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP)
DOI: 10.1109/ISCSLP49672.2021.9362110
Citations: 17
Abstract
In a typical voice conversion system, previous works have used various acoustic features of the source speech (such as pitch, the voiced/unvoiced flag, and aperiodicity) to control the prosody of the converted speech. However, prosody depends on many factors, such as intonation, stress, and rhythm, so it is challenging to describe it perfectly through hand-crafted acoustic features. To address this difficulty, we propose to use prosody embeddings, learned from the source speech in an unsupervised manner, to describe prosody. To verify the effectiveness of the proposed method, we conduct experiments on our Mandarin corpus. Experimental results show that the proposed method improves both the speech quality and speaker similarity of the converted speech. Moreover, we observe that our method achieves promising results even on singing voice.
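The abstract does not specify the architecture of the prosody encoder, but the general idea of mapping variable-length source speech to a fixed-size prosody embedding can be illustrated with a minimal NumPy sketch. Everything below (function name, mean-pooling summary, linear projection, dimensions) is a hypothetical simplification for illustration, not the paper's actual model:

```python
import numpy as np

def prosody_embedding(mel_frames, W, b):
    """Map a variable-length mel-spectrogram (T x n_mels) to a fixed-size
    prosody embedding by mean-pooling over time and applying a tanh
    projection. This is a toy stand-in for a learned reference-style
    encoder; the paper's exact architecture is not given in the abstract."""
    pooled = mel_frames.mean(axis=0)   # (n_mels,) time-averaged summary
    return np.tanh(W @ pooled + b)     # (embed_dim,) bounded embedding

rng = np.random.default_rng(0)
n_mels, embed_dim = 80, 16             # hypothetical feature/embedding sizes
W = 0.01 * rng.standard_normal((embed_dim, n_mels))
b = np.zeros(embed_dim)

# A 120-frame source utterance and a 50-frame one both map to the
# same fixed embedding size, which is what lets the conversion model
# condition on prosody regardless of utterance length.
emb = prosody_embedding(rng.standard_normal((120, n_mels)), W, b)
print(emb.shape)
```

In the actual system, `W` and `b` would be replaced by a trainable network optimized end-to-end with the conversion model, so the embedding captures intonation, stress, and rhythm without hand-crafted labels.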