L. Sbattella, Luca Colombo, Carlo Rinaldi, Roberto Tedesco, M. Matteucci, Alessandro Trivilini
International Conference on Physiological Computing Systems
DOI: 10.5220/0004699301830195
Extracting Emotions and Communication Styles from Vocal Signals
Many psychological and social studies have highlighted the two distinct channels we use to exchange information: an explicit, linguistic channel, and an implicit, paralinguistic channel. The latter carries information about the emotional state of the speaker, providing clues about the implicit meaning of the message. In particular, the paralinguistic channel can improve applications requiring human-machine interaction (for example, Automatic Speech Recognition systems or Conversational Agents), as well as support the analysis of human-human interaction (think, for example, of clinical or forensic applications). In this work we present PrEmA, a tool able to recognize and classify both the emotions and the communication style of the speaker, relying on prosodic features. In particular, communication-style recognition is, to our knowledge, new, and could be used to infer interesting clues about the state of the interaction. We selected two sets of prosodic features and trained two classifiers based on Linear Discriminant Analysis. The experiments we conducted with Italian speakers provided encouraging results (Ac=71% for classification of emotions, Ac=86% for classification of communication styles), showing that the models were able to discriminate among emotions and communication styles, associating phrases with the correct labels.
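The classification approach described in the abstract can be sketched in a few lines: extract a prosodic feature vector per phrase, then fit a Linear Discriminant Analysis classifier over the labeled vectors. The sketch below is illustrative only, assuming placeholder prosodic features (mean pitch, pitch range, mean intensity, speech rate) and synthetic data; it does not reproduce the actual PrEmA feature set, corpus, or labels.

```python
# Illustrative sketch: LDA over prosodic feature vectors, as in the
# approach the abstract describes. Features, labels, and data are
# synthetic placeholders, not the PrEmA feature set or corpus.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical prosodic features per phrase:
# [mean pitch (Hz), pitch range (Hz), mean intensity (dB), speech rate (syl/s)]
n_per_class = 40
emotions = ["anger", "joy", "sadness"]  # placeholder emotion labels

# Synthetic clusters standing in for features extracted from real speech.
X = np.vstack([
    rng.normal(loc=[250, 90, 72, 6.0], scale=[15, 10, 3, 0.5], size=(n_per_class, 4)),
    rng.normal(loc=[230, 70, 68, 5.5], scale=[15, 10, 3, 0.5], size=(n_per_class, 4)),
    rng.normal(loc=[170, 30, 58, 3.5], scale=[15, 10, 3, 0.5], size=(n_per_class, 4)),
])
y = np.repeat(emotions, n_per_class)

# One LDA classifier per task (emotions here; communication styles
# would follow the same pattern with different labels).
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)  # per-fold accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```

LDA is a natural fit here because it models each class with a Gaussian sharing a common covariance, yielding linear decision boundaries that work well with small, low-dimensional prosodic feature sets.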