Title: Extending Multimodal Emotion Recognition with Biological Signals: Presenting a Novel Dataset and Recent Findings
Author: Alice Baird
DOI: 10.1145/3423327.3423512 (https://doi.org/10.1145/3423327.3423512)
Published in: Proceedings of the 1st International on Multimodal Sentiment Analysis in Real-life Media Challenge and Workshop
Publication date: 2020-10-16
Citations: 0
Abstract
Multimodal fusion has shown great promise in recent literature, particularly for audio-dominant tasks. In this talk, we outline the findings from a recently developed multimodal dataset and discuss the promise of fusing biological signals with speech for continuous recognition of the emotional dimensions of valence and arousal in the context of public speaking. In addition, we discuss the advantage of cross-language (German and English) analysis, training language-independent models and testing them on speech from various native and non-native groupings. For the emotion recognition task used as a case study, a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) architecture with a self-attention layer is used.
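To make the architecture described above concrete, the following is a minimal sketch, not the authors' implementation, of scaled dot-product self-attention applied over a sequence of frame-level hidden states such as those an LSTM-RNN would produce, followed by a hypothetical linear readout to per-frame valence/arousal values. The projection matrices `Wq`, `Wk`, `Wv` and the readout head `W_out` are illustrative assumptions, initialized randomly.

```python
# Sketch: self-attention over LSTM-style hidden states for continuous
# valence/arousal regression. All weights here are random placeholders.
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H, Wq, Wk, Wv):
    """Scaled dot-product self-attention over hidden states H of shape (T, d)."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (T, T) pairwise relevance
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # context-enriched states, (T, d)

rng = np.random.default_rng(0)
T, d = 10, 8                        # 10 time steps, 8-dim hidden states
H = rng.standard_normal((T, d))     # stand-in for LSTM-RNN outputs
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
Z = self_attention(H, Wq, Wk, Wv)

# Hypothetical per-frame regression head: one (valence, arousal) pair per step.
W_out = 0.1 * rng.standard_normal((d, 2))
valence_arousal = Z @ W_out         # shape (T, 2)
```

The attention layer lets each time step weight information from every other step, which suits continuous emotion labels that depend on longer-range vocal context than a single frame.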