Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions

Bernd Dudzik, J. Broekens, Mark Antonius Neerincx, H. Hung
{"title":"Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions","authors":"Bernd Dudzik, J. Broekens, Mark Antonius Neerincx, H. Hung","doi":"10.1145/3382507.3418814","DOIUrl":null,"url":null,"abstract":"Empirical evidence suggests that the emotional meaning of facial behavior in isolation is often ambiguous in real-world conditions. While humans complement interpretations of others' faces with additional reasoning about context, automated approaches rarely display such context-sensitivity. Empirical findings indicate that the personal memories triggered by videos are crucial for predicting viewers' emotional response to such videos ?- in some cases, even more so than the video's audiovisual content. In this article, we explore the benefits of personal memories as context for facial behavior analysis. We conduct a series of multimodal machine learning experiments combining the automatic analysis of video-viewers' faces with that of two types of context information for affective predictions: \\beginenumerate* [label=(\\arabic*)] \\item self-reported free-text descriptions of triggered memories and \\item a video's audiovisual content \\endenumerate*. Our results demonstrate that both sources of context provide models with information about variation in viewers' affective responses that complement facial analysis and each other.","PeriodicalId":402394,"journal":{"name":"Proceedings of the 2020 International Conference on Multimodal Interaction","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 International Conference on Multimodal Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3382507.3418814","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Empirical evidence suggests that the emotional meaning of facial behavior in isolation is often ambiguous in real-world conditions. While humans complement interpretations of others' faces with additional reasoning about context, automated approaches rarely display such context-sensitivity. Empirical findings indicate that the personal memories triggered by videos are crucial for predicting viewers' emotional responses to such videos, in some cases even more so than the videos' audiovisual content. In this article, we explore the benefits of personal memories as context for facial behavior analysis. We conduct a series of multimodal machine learning experiments combining the automatic analysis of video-viewers' faces with that of two types of context information for affective predictions: (1) self-reported free-text descriptions of triggered memories and (2) a video's audiovisual content. Our results demonstrate that both sources of context provide models with information about variation in viewers' affective responses that complements facial analysis and each other.
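The abstract describes combining facial behavior analysis with two sources of context (memory descriptions and video content) in multimodal prediction models. As a rough illustration of one way such a combination could be set up, the sketch below shows a simple late-fusion regressor over pre-extracted per-modality feature vectors. This is not the authors' implementation; the feature dimensions, layer sizes, and the choice of a two-dimensional affect output (e.g., valence and arousal) are assumptions for illustration only.

```python
# Minimal late-fusion sketch (illustrative, not the paper's actual model):
# three pre-extracted feature vectors -- facial behavior, memory-description
# text, and video content -- are each encoded, concatenated, and regressed
# onto two affect dimensions. All dimensions below are placeholder values.
import torch
import torch.nn as nn


class LateFusionAffectRegressor(nn.Module):
    def __init__(self, face_dim=128, memory_dim=300, video_dim=256, hidden=64):
        super().__init__()
        # One small encoder per modality.
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        self.memory_enc = nn.Sequential(nn.Linear(memory_dim, hidden), nn.ReLU())
        self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        # Fusion head predicting two affect dimensions (e.g., valence, arousal).
        self.head = nn.Linear(3 * hidden, 2)

    def forward(self, face_feats, memory_feats, video_feats):
        fused = torch.cat(
            [
                self.face_enc(face_feats),
                self.memory_enc(memory_feats),
                self.video_enc(video_feats),
            ],
            dim=-1,
        )
        return self.head(fused)


# Example forward pass with random placeholder features for a batch of 4 viewers.
model = LateFusionAffectRegressor()
pred = model(torch.randn(4, 128), torch.randn(4, 300), torch.randn(4, 256))
print(pred.shape)  # torch.Size([4, 2])
```

In a setup like this, dropping one of the encoder branches gives a unimodal or bimodal baseline, which is the kind of comparison the abstract's claim about complementary context sources would rest on.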