AVEC'15 Keynote Talk: From Facial Expression Analysis to Multimodal Mood Analysis

Roland Göcke
{"title":"AVEC'15 Keynote Talk: From Facial Expression Analysis to Multimodal Mood Analysis","authors":"Roland Göcke","doi":"10.1145/2808196.2808197","DOIUrl":null,"url":null,"abstract":"In this talk, I will give an overview of our research into developing multimodal technology that analyses the affective state and more broadly behaviour of humans. Such technology is useful for a number of applications, with applications in healthcare, e.g. mental health disorders, being a particular focus for us. Depression and other mood disorders are common and disabling disorders. Their impact on individuals and families is profound. The WHO Global Burden of Disease reports quantify depression as the leading cause of disability worldwide. Despite the high prevalence, current clinical practice depends almost exclusively on self-report and clinical opinion, risking a range of subjective biases. There currently exist no laboratory-based measures of illness expression, course and recovery, and no objective markers of end-points for interventions in both clinical and research settings. Using a multimodal analysis of facial expressions and movements, body posture, head movements as well as vocal expressions, we are developing affective sensing technology that supports clinicians in the diagnosis and monitoring of treatment progress. Encouraging results from a recently completed pilot study demonstrate that this approach can achieve over 90% agreement with clinical assessment. After more than eight years of research, I will also talk about the lessons learnt in this project, such as measuring spontaneous expressions of affect, subtle expressions, and affect intensity using multimodal approaches. We are currently extending this line of research to other disorders such as anxiety, post-traumatic stress disorder, dementia and autism spectrum disorders. In particular for the latter, a natural progression is to analyse dyadic and group social interactions. At the core of our research is a focus on robust approaches that can work in real-world environments.","PeriodicalId":123597,"journal":{"name":"Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2808196.2808197","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this talk, I will give an overview of our research into developing multimodal technology that analyses the affective state and, more broadly, the behaviour of humans. Such technology is useful for a number of applications; healthcare, and mental health disorders in particular, is a key focus for us. Depression and other mood disorders are common and disabling, and their impact on individuals and families is profound. The WHO Global Burden of Disease reports quantify depression as the leading cause of disability worldwide. Despite this high prevalence, current clinical practice depends almost exclusively on self-report and clinical opinion, risking a range of subjective biases. There are currently no laboratory-based measures of illness expression, course and recovery, and no objective markers of intervention end-points in either clinical or research settings. Using a multimodal analysis of facial expressions and movements, body posture, head movements and vocal expressions, we are developing affective sensing technology that supports clinicians in diagnosis and in monitoring treatment progress. Encouraging results from a recently completed pilot study show that this approach can achieve over 90% agreement with clinical assessment. I will also discuss the lessons learnt over more than eight years of research on this project, such as measuring spontaneous expressions of affect, subtle expressions and affect intensity using multimodal approaches. We are currently extending this line of research to other disorders such as anxiety, post-traumatic stress disorder, dementia and autism spectrum disorders. For the latter in particular, a natural progression is to analyse dyadic and group social interactions. At the core of our research is a focus on robust approaches that can work in real-world environments.
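The abstract does not detail how the modalities are combined or classified; the following is a minimal, purely illustrative sketch of one common pattern in this area, feature-level fusion, in which pre-extracted per-modality feature vectors (facial, head movement, posture, vocal) are concatenated and passed to a binary classifier. All variable names, feature dimensions and data in the sketch are assumptions for illustration, not the method presented in the talk.

```python
# Illustrative sketch only: the talk does not specify the model.
# Assumed setup: per-session feature vectors for each modality are
# fused by concatenation (feature-level fusion) and used to train a
# binary depressed/non-depressed classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60  # hypothetical number of recording sessions

# Hypothetical per-modality features (dimensions are placeholders).
facial  = rng.normal(size=(n, 32))    # e.g. facial action unit statistics
head    = rng.normal(size=(n, 8))     # e.g. head-pose movement statistics
posture = rng.normal(size=(n, 12))    # e.g. body-posture descriptors
vocal   = rng.normal(size=(n, 24))    # e.g. prosodic/spectral statistics
labels  = rng.integers(0, 2, size=n)  # placeholder clinician-assigned labels

# Feature-level fusion: concatenate the modality vectors per session.
X = np.hstack([facial, head, posture, vocal])

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated agreement with labels: {scores.mean():.2f}")
```

With real features, the cross-validated agreement printed at the end would be the analogue of the abstract's reported agreement with clinical assessment; decision-level (late) fusion, where each modality gets its own classifier and the outputs are combined, is an equally common alternative.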