{"title":"Emotion extractor: A methodology to implement prosody features in speech synthesis","authors":"M. Chandak, R. Dharaskar","doi":"10.1109/ICECTECH.2010.5479961","DOIUrl":null,"url":null,"abstract":"This paper presents the methodology to extract emotion from the text at real time and add the expression to the documents contents during speech synthesis. To understand the existence of emotions self assessment test was carried out on set of documents and preliminary rules were formulated for three basic emotions: Pleasure, Arousal and Dominance. These rules are used in an automated procedure that assigns emotional state values to document contents. These values are then used by speech synthesizer to add emotions to speech. The system is language independent and content free.","PeriodicalId":178300,"journal":{"name":"2010 2nd International Conference on Electronic Computer Technology","volume":"42 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 2nd International Conference on Electronic Computer Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICECTECH.2010.5479961","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
This paper presents a methodology for extracting emotion from text in real time and adding expression to document contents during speech synthesis. To establish the presence of emotions, a self-assessment test was carried out on a set of documents, and preliminary rules were formulated for three basic emotions: Pleasure, Arousal and Dominance. These rules are used in an automated procedure that assigns emotional state values to document contents. These values are then used by the speech synthesizer to add emotion to the synthesized speech. The system is language independent and content free.
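As a rough illustration of the pipeline described in the abstract, the Python sketch below shows how rule-based Pleasure/Arousal/Dominance (PAD) scores might be assigned to a sentence and then mapped to prosody controls for a synthesizer. The paper does not publish code; the keyword rules, scaling factors, and function names here are illustrative assumptions, not the authors' actual rules derived from their self-assessment tests.

# Hypothetical sketch of a rule-based PAD emotion extractor and prosody mapper.
# The keyword rules and the PAD-to-prosody scaling below are assumptions for
# illustration only; they are not the rules reported in the paper.
from dataclasses import dataclass

@dataclass
class PAD:
    pleasure: float = 0.0   # valence of the text, roughly in [-1, 1]
    arousal: float = 0.0    # excitement/intensity, roughly in [-1, 1]
    dominance: float = 0.0  # sense of control, roughly in [-1, 1]

# Assumed keyword rules: each matched word nudges the emotional state.
RULES = {
    "happy":  PAD(0.8, 0.5, 0.3),
    "angry":  PAD(-0.6, 0.8, 0.6),
    "afraid": PAD(-0.6, 0.7, -0.7),
    "calm":   PAD(0.4, -0.6, 0.2),
}

def extract_emotion(sentence: str) -> PAD:
    """Assign an emotional state value to a sentence by averaging rule hits."""
    hits = [RULES[w] for w in sentence.lower().split() if w in RULES]
    if not hits:
        return PAD()  # neutral state when no rule matches
    n = len(hits)
    return PAD(
        sum(h.pleasure for h in hits) / n,
        sum(h.arousal for h in hits) / n,
        sum(h.dominance for h in hits) / n,
    )

def pad_to_prosody(state: PAD) -> dict:
    """Map PAD values to prosody parameters a synthesizer could consume.
    The scaling factors are assumed, not taken from the paper."""
    return {
        "pitch_shift_semitones": 2.0 * state.pleasure + 1.0 * state.arousal,
        "rate_factor": 1.0 + 0.3 * state.arousal,
        "volume_factor": 1.0 + 0.2 * state.dominance,
    }

if __name__ == "__main__":
    text = "I am so happy to see you"
    state = extract_emotion(text)
    print(state, pad_to_prosody(state))

In this sketch the extracted PAD values feed directly into per-sentence prosody parameters, which mirrors the abstract's two-stage design (emotion extraction followed by expressive synthesis) while leaving the actual rule formulation and synthesizer integration to the paper.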