Integration and evaluation of emotion in an articulatory speech synthesis system
Martin Schorradt, K. Legde, Susana Castillo, D. Cunningham
Proceedings of the ACM SIGGRAPH Symposium on Applied Perception, published 2015-09-13. DOI: 10.1145/2804408.2814183
Citations: 0
Abstract
We convey a tremendous amount of information vocally. In addition to the obvious exchange of semantic information, we unconsciously vary a number of acoustic properties of the speech wave to provide information about our emotions, thoughts, and intentions [Cahn 1990]. Advances in the understanding of human physiology, combined with the increased computational power of modern computers, have made simulation of the human vocal tract a realistic option for creating artificial speech. Such systems can, in principle, produce any sound that a human can make. Here we present two experiments examining the expression of emotion through prosody (i.e., speech melody) in human recordings and in an articulatory speech synthesis system.
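The abstract's notion of prosody as "speech melody" can be made concrete with a small sketch. The code below is not from the paper; it is a hypothetical illustration of two prosodic cues commonly manipulated in emotional speech synthesis: baseline pitch (F0) and pitch excursion. A "neutral" and an "excited" rendering of the same utterance length differ only in these two parameters.

```python
import numpy as np

def pitch_contour(duration_s, base_f0, excursion, sr=16000):
    """Simple rise-fall F0 contour peaking mid-utterance.

    A larger `excursion` (pitch range) is one cue typically
    associated with higher emotional arousal. This shape is a
    placeholder, not the contour model used in the paper.
    """
    t = np.linspace(0.0, duration_s, int(sr * duration_s), endpoint=False)
    # Triangular shape: 0 at the edges, 1 at the utterance midpoint.
    shape = 1.0 - np.abs(2.0 * t / duration_s - 1.0)
    return t, base_f0 + excursion * shape

def synth_tone(duration_s, base_f0, excursion, sr=16000):
    """Sinusoidal source whose instantaneous pitch follows the contour."""
    _, f0 = pitch_contour(duration_s, base_f0, excursion, sr)
    # Integrate instantaneous frequency to obtain phase.
    phase = 2.0 * np.pi * np.cumsum(f0) / sr
    return np.sin(phase)

# Same utterance length, different prosodic settings:
neutral = synth_tone(0.5, base_f0=120.0, excursion=10.0)   # flat, low pitch
excited = synth_tone(0.5, base_f0=180.0, excursion=60.0)   # high, wide-ranging
```

In a full articulatory synthesizer the source signal would drive a vocal-tract model rather than being output directly, but the principle is the same: emotional color is carried by how the pitch trajectory is shaped over time, independent of the segmental (phoneme) content.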