{"title":"Time-frequency approach for analysis and synthesis of particular emotional voice","authors":"M. Kobayashi, S. Wada","doi":"10.1109/ISSPIT.2005.1577069","DOIUrl":null,"url":null,"abstract":"In this paper, the analysis and synthesis methods of particular emotional voice for man-machine natural interface is developed. First, the emotional voice (neutral, anger, sadness, joy, dislike) is analyzed using time-frequency representation of speech and similarity analysis. Then, based on the result of emotional analysis, a voice with neutral emotion is transformed to synthesize the particular emotional voice using time-frequency modifications. In the simulations, five types of emotion are analyzed using 50 samples of speech signals. The satisfactory average discrimination rate is achieved in the similarity analysis. Further, the synthesized emotional voice is subjectively evaluated. It is confirmed that the emotional voice is naturally generated by the proposed time-frequency based approach","PeriodicalId":421826,"journal":{"name":"Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSPIT.2005.1577069","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
In this paper, analysis and synthesis methods for particular emotional voices are developed for a natural man-machine interface. First, emotional voices (neutral, anger, sadness, joy, dislike) are analyzed using a time-frequency representation of speech and a similarity analysis. Then, based on the results of this emotion analysis, a voice with neutral emotion is transformed to synthesize the particular emotional voice using time-frequency modifications. In the simulations, the five types of emotion are analyzed using 50 samples of speech signals, and a satisfactory average discrimination rate is achieved in the similarity analysis. Further, the synthesized emotional voices are evaluated subjectively, confirming that emotional voice is generated naturally by the proposed time-frequency-based approach.
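The abstract describes two stages: time-frequency analysis of emotional speech with a similarity measure, and synthesis by modifying the time-frequency representation of a neutral voice. The Python sketch below is only a rough illustration of that pipeline, not the authors' method: it assumes an STFT magnitude spectrogram as the time-frequency representation, cosine similarity against per-emotion template spectra as the similarity analysis, and a per-frequency gain on a neutral spectrogram followed by ISTFT resynthesis as the time-frequency modification. The sampling rate, window length, and all function names are assumptions for illustration.

```python
# Hypothetical sketch of the two stages suggested by the abstract:
# (1) time-frequency analysis + similarity scoring of an emotional utterance,
# (2) time-frequency modification of a neutral utterance and resynthesis.
# The STFT and cosine similarity are stand-ins for the paper's (unspecified
# here) representation and similarity measure.
import numpy as np
from scipy.signal import stft, istft

FS = 16000        # assumed sampling rate (Hz)
NPERSEG = 512     # assumed STFT window length (samples)

def tf_features(x, fs=FS):
    """Return the STFT magnitude spectrogram and its time-averaged spectrum."""
    _, _, Z = stft(x, fs=fs, nperseg=NPERSEG)
    mag = np.abs(Z)
    return mag, mag.mean(axis=1)          # (freq x time), (freq,)

def cosine_similarity(a, b):
    """Cosine similarity between two averaged spectra."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify_emotion(x, templates):
    """Pick the emotion whose template spectrum is most similar to x.
    `templates` maps emotion name -> time-averaged template spectrum."""
    _, spec = tf_features(x)
    return max(templates, key=lambda e: cosine_similarity(spec, templates[e]))

def synthesize_emotion(neutral, target_spec, fs=FS):
    """Reweight the neutral spectrogram toward a target average spectrum
    (a crude time-frequency modification), then resynthesize via the ISTFT."""
    _, _, Z = stft(neutral, fs=fs, nperseg=NPERSEG)
    mag, phase = np.abs(Z), np.angle(Z)
    neutral_spec = mag.mean(axis=1) + 1e-12
    gain = (target_spec / neutral_spec)[:, None]   # per-frequency gain
    _, y = istft(mag * gain * np.exp(1j * phase), fs=fs, nperseg=NPERSEG)
    return y
```

In this sketch, per-emotion templates would be averaged from labeled training utterances; the paper's actual discrimination and modification rules, as well as which time-frequency parameters it adjusts, are not specified in the abstract.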