I Know How you Feel Now, and Here's why!: Demystifying Time-Continuous High Resolution Text-Based Affect Predictions in the Wild

Vedhas Pandit, Maximilian Schmitt, N. Cummins, Björn Schuller
{"title":"我知道你现在的感受,原因如下!:在野外揭开时间连续高分辨率文本影响预测的神秘面纱","authors":"Vedhas Pandit, Maximilian Schmitt, N. Cummins, Björn Schuller","doi":"10.1109/CBMS.2019.00096","DOIUrl":null,"url":null,"abstract":"Affective computing 'in the wild' is of huge relevance to the healthcare field, like it is for many industries today. Applications of direct relevance are patient monitoring (e.g., emotional state, depression and pain monitoring), health information mining, diagnosis and opinion mining (e.g., from medical reports and drug reviews). The prevalence of the text modality in the medical field for various reasons – e.g., privacy laws, high costs and prohibitory memory requirements for audio and video data – has made the text modality the most popular. Deviating away from traditionally a classification task at a sample-level, the promising baseline results for the Audio/Visual Emotion Challenge (AVEC) 2017 make a strong case for the suitability of text data for a 'time-continuous' affect estimation. For the very first time, we present insights into the inner workings of deep learning, 'in the wild' affect-predicting, time-continuous regression model. We compute relevance of the sparse text-based bag-of-words features (BoTW) of the AVEC 2017 challenge in estimating the three affect labels, viz. arousal, valence and liking, by using a layerwise relevance propagation method(LRP). Interestingly, the trained models are found to rely more on adjectives and adverbs such as 'schlecht', 'gut', 'genau' with positive or negative connotations, and action descriptors such as and – quite analogous to the human perception of emotion expression.","PeriodicalId":311634,"journal":{"name":"2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"I Know How you Feel Now, and Here's why!: Demystifying Time-Continuous High Resolution Text-Based Affect Predictions in the Wild\",\"authors\":\"Vedhas Pandit, Maximilian Schmitt, N. Cummins, Björn Schuller\",\"doi\":\"10.1109/CBMS.2019.00096\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Affective computing 'in the wild' is of huge relevance to the healthcare field, like it is for many industries today. Applications of direct relevance are patient monitoring (e.g., emotional state, depression and pain monitoring), health information mining, diagnosis and opinion mining (e.g., from medical reports and drug reviews). The prevalence of the text modality in the medical field for various reasons – e.g., privacy laws, high costs and prohibitory memory requirements for audio and video data – has made the text modality the most popular. Deviating away from traditionally a classification task at a sample-level, the promising baseline results for the Audio/Visual Emotion Challenge (AVEC) 2017 make a strong case for the suitability of text data for a 'time-continuous' affect estimation. For the very first time, we present insights into the inner workings of deep learning, 'in the wild' affect-predicting, time-continuous regression model. We compute relevance of the sparse text-based bag-of-words features (BoTW) of the AVEC 2017 challenge in estimating the three affect labels, viz. arousal, valence and liking, by using a layerwise relevance propagation method(LRP). 
Interestingly, the trained models are found to rely more on adjectives and adverbs such as 'schlecht', 'gut', 'genau' with positive or negative connotations, and action descriptors such as and – quite analogous to the human perception of emotion expression.\",\"PeriodicalId\":311634,\"journal\":{\"name\":\"2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS)\",\"volume\":\"42 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CBMS.2019.00096\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CBMS.2019.00096","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Affective computing 'in the wild' is of huge relevance to the healthcare field, as it is for many industries today. Applications of direct relevance include patient monitoring (e.g., emotional state, depression and pain monitoring), health information mining, diagnosis, and opinion mining (e.g., from medical reports and drug reviews). For various reasons – e.g., privacy laws, high costs, and the prohibitive memory requirements of audio and video data – text has become the most popular modality in the medical field. Deviating from the traditional sample-level classification task, the promising baseline results of the Audio/Visual Emotion Challenge (AVEC) 2017 make a strong case for the suitability of text data for 'time-continuous' affect estimation. For the very first time, we present insights into the inner workings of a deep learning, 'in the wild', affect-predicting, time-continuous regression model. Using a layer-wise relevance propagation (LRP) method, we compute the relevance of the sparse text-based bag-of-words (BoTW) features of the AVEC 2017 challenge in estimating the three affect labels, viz. arousal, valence and liking. Interestingly, the trained models are found to rely more on adjectives and adverbs with positive or negative connotations, such as 'schlecht', 'gut' and 'genau', and on action descriptors – quite analogous to the human perception of emotion expression.
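
The core analysis tool named in the abstract is layer-wise relevance propagation (LRP). As a rough illustration of the idea only, and not the authors' actual architecture, features, or code, the sketch below applies the LRP epsilon-rule to a tiny bias-free dense regressor over sparse bag-of-text-words term counts; the vocabulary, layer sizes, random weights, and the helper `lrp_epsilon` are all hypothetical assumptions.

```python
# Minimal sketch of the epsilon-rule of layer-wise relevance propagation (LRP)
# applied to a toy dense regressor over sparse bag-of-text-words (BoTW) counts.
# Vocabulary, layer sizes and weights are illustrative assumptions only and do
# not reflect the architecture used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """Redistribute the output relevance back through bias-free dense layers."""
    relevance = relevance_out
    # Walk the layers from the output back towards the input.
    for W, a in zip(reversed(weights), reversed(activations[:-1])):
        z = a @ W                                              # pre-activations of the upper layer
        s = relevance / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilised relevance ratio
        c = s @ W.T                                            # contributions sent downwards
        relevance = a * c                                      # relevance of the lower layer
    return relevance

# Toy setup: 5 BoTW term counts -> 4 hidden units -> one affect value (e.g. valence).
vocab = ["gut", "schlecht", "genau", "haus", "und"]            # hypothetical vocabulary
W1 = rng.normal(scale=0.5, size=(5, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

x = np.array([2.0, 0.0, 1.0, 3.0, 5.0])                        # sparse term counts in one time window
h = relu(x @ W1)
y = h @ W2                                                     # predicted affect value

# The prediction's relevance is redistributed onto the individual words.
R_input = lrp_epsilon([W1, W2], [x, h, y], relevance_out=y.copy())
for word, r in sorted(zip(vocab, R_input), key=lambda t: -abs(t[1])):
    print(f"{word:10s} relevance {r:+.3f}")
```

In the paper's setting, the input would be the sparse BoTW vector of one time window of the transcript, the output a time-continuous arousal, valence, or liking estimate, and per-word relevance scores of this kind are what surface words such as 'gut' or 'schlecht' as the model's main evidence.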