Extracting Emotions and Communication Styles from Vocal Signals

L. Sbattella, Luca Colombo, Carlo Rinaldi, Roberto Tedesco, M. Matteucci, Alessandro Trivilini
{"title":"从声音信号中提取情感和交流风格","authors":"L. Sbattella, Luca Colombo, Carlo Rinaldi, Roberto Tedesco, M. Matteucci, Alessandro Trivilini","doi":"10.5220/0004699301830195","DOIUrl":null,"url":null,"abstract":"Many psychological and social studies highlighted the two distinct channels we use to exchange information among us—an explicit, linguistic channel, and an implicit, paralinguistic channel. The latter contains information about the emotional state of the speaker, providing clues about the implicit meaning of the message. In particular, the paralinguistic channel can improve applications requiring human-machine interactions (for example, Automatic Speech Recognition systems or Conversational Agents), as well as support the analysis of human-human interactions (think, for example, of clinic or forensic applications). In this work we present PrEmA, a tool able to recognize and classify both emotions and communication style of the speaker, relying on prosodic features. In particular, communication-style recognition is, to our knowledge, new, and could be used to infer interesting clues about the state of the interaction. We selected two sets of prosodic features, and trained two classifiers, based on the Linear Discriminant Analysis. The experiments we conducted, with Italian speakers, provided encouraging results (Ac=71% for classification of emotions, Ac=86% for classification of communication styles), showing that the models were able to discriminate among emotions and communication styles, associating phrases with the correct labels.","PeriodicalId":326453,"journal":{"name":"International Conference on Physiological Computing Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Extracting Emotions and Communication Styles from Vocal Signals\",\"authors\":\"L. Sbattella, Luca Colombo, Carlo Rinaldi, Roberto Tedesco, M. Matteucci, Alessandro Trivilini\",\"doi\":\"10.5220/0004699301830195\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Many psychological and social studies highlighted the two distinct channels we use to exchange information among us—an explicit, linguistic channel, and an implicit, paralinguistic channel. The latter contains information about the emotional state of the speaker, providing clues about the implicit meaning of the message. In particular, the paralinguistic channel can improve applications requiring human-machine interactions (for example, Automatic Speech Recognition systems or Conversational Agents), as well as support the analysis of human-human interactions (think, for example, of clinic or forensic applications). In this work we present PrEmA, a tool able to recognize and classify both emotions and communication style of the speaker, relying on prosodic features. In particular, communication-style recognition is, to our knowledge, new, and could be used to infer interesting clues about the state of the interaction. We selected two sets of prosodic features, and trained two classifiers, based on the Linear Discriminant Analysis. 
The experiments we conducted, with Italian speakers, provided encouraging results (Ac=71% for classification of emotions, Ac=86% for classification of communication styles), showing that the models were able to discriminate among emotions and communication styles, associating phrases with the correct labels.\",\"PeriodicalId\":326453,\"journal\":{\"name\":\"International Conference on Physiological Computing Systems\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Conference on Physiological Computing Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5220/0004699301830195\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Physiological Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5220/0004699301830195","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Many psychological and social studies have highlighted the two distinct channels we use to exchange information: an explicit, linguistic channel and an implicit, paralinguistic channel. The latter carries information about the emotional state of the speaker, providing clues about the implicit meaning of the message. In particular, the paralinguistic channel can improve applications requiring human-machine interaction (for example, Automatic Speech Recognition systems or Conversational Agents), and can support the analysis of human-human interaction (think, for example, of clinical or forensic applications). In this work we present PrEmA, a tool able to recognize and classify both the emotions and the communication style of the speaker, relying on prosodic features. In particular, communication-style recognition is, to our knowledge, new, and could be used to infer interesting clues about the state of the interaction. We selected two sets of prosodic features and trained two classifiers based on Linear Discriminant Analysis. The experiments we conducted with Italian speakers provided encouraging results (Ac=71% for classification of emotions, Ac=86% for classification of communication styles), showing that the models were able to discriminate among emotions and communication styles, associating phrases with the correct labels.
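The abstract gives no implementation details, so the following is only a minimal sketch of the kind of pipeline it describes: extract a few pitch- and energy-based prosodic statistics from each utterance, then fit a Linear Discriminant Analysis classifier. The feature set, file names, and labels below are placeholders (assumptions), not the paper's actual configuration; librosa and scikit-learn stand in for whatever tooling PrEmA uses.

```python
# Hypothetical sketch of prosodic-feature extraction + LDA classification.
# Not the paper's actual feature set or corpus.
import numpy as np
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def prosodic_features(wav_path):
    """Summarize pitch (F0) and energy statistics for one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz('C2'), fmax=librosa.note_to_hz('C7'), sr=sr)
    f0 = f0[~np.isnan(f0)]                 # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]      # frame-level energy
    return np.array([f0.mean(), f0.std(), f0.max() - f0.min(),
                     rms.mean(), rms.std()])

# Placeholder corpus: replace with real labeled recordings.
wav_files = ["ang_01.wav", "ang_02.wav", "joy_01.wav", "joy_02.wav"]
emotion_labels = ["anger", "anger", "joy", "joy"]

X = np.vstack([prosodic_features(p) for p in wav_files])
clf = LinearDiscriminantAnalysis()
clf.fit(X, emotion_labels)

# The second classifier in the paper (communication styles) would follow
# the same pattern, trained on style labels instead of emotion labels.
print(clf.predict(prosodic_features("new_utt.wav").reshape(1, -1)))
```

The paper trains two separate LDA models, one for emotions and one for communication styles; the sketch shows only the emotion side, and the actual feature sets used by PrEmA may differ from the few statistics computed here.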