Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition: Latest Publications

Goals, Tasks, and Bonds: Toward the Computational Assessment of Therapist Versus Client Perception of Working Alliance.
Alexandria K Vail, Jeffrey Girard, Lauren Bylsma, Jeffrey Cohn, Jay Fournier, Holly Swartz, Louis-Philippe Morency
{"title":"Goals, Tasks, and Bonds: Toward the Computational Assessment of Therapist Versus Client Perception of Working Alliance.","authors":"Alexandria K Vail, Jeffrey Girard, Lauren Bylsma, Jeffrey Cohn, Jay Fournier, Holly Swartz, Louis-Philippe Morency","doi":"10.1109/fg52635.2021.9667021","DOIUrl":"10.1109/fg52635.2021.9667021","url":null,"abstract":"<p><p>Early client dropout is one of the most significant challenges facing psychotherapy: recent studies suggest that at least one in five clients will leave treatment prematurely. Clients may terminate therapy for various reasons, but one of the most common causes is the lack of a strong <i>working alliance</i>. The concept of working alliance captures the collaborative relationship between a client and their therapist when working toward the progress and recovery of the client seeking treatment. Unfortunately, clients are often unwilling to directly express dissatisfaction in care until they have already decided to terminate therapy. On the other side, therapists may miss subtle signs of client discontent during treatment before it is too late. In this work, we demonstrate that nonverbal behavior analysis may aid in bridging this gap. The present study focuses primarily on the head gestures of both the client and therapist, contextualized within conversational turn-taking actions between the pair during psychotherapy sessions. We identify multiple behavior patterns suggestive of an individual's perspective on the working alliance; interestingly, these patterns also differ between the client and the therapist. These patterns inform the development of predictive models for self-reported ratings of working alliance, which demonstrate significant predictive power for both client and therapist ratings. Future applications of such models may stimulate preemptive intervention to strengthen a weak working alliance, whether explicitly attempting to repair the existing alliance or establishing a more suitable client-therapist pairing, to ensure that clients encounter fewer barriers to receiving the treatment they need.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9355426/pdf/nihms-1771359.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40700885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Simple and Effective Approaches for Uncertainty Prediction in Facial Action Unit Intensity Regression.
Torsten Wörtwein, Louis-Philippe Morency
{"title":"Simple and Effective Approaches for Uncertainty Prediction in Facial Action Unit Intensity Regression.","authors":"Torsten Wörtwein,&nbsp;Louis-Philippe Morency","doi":"10.1109/fg47880.2020.00045","DOIUrl":"https://doi.org/10.1109/fg47880.2020.00045","url":null,"abstract":"<p><p>Knowing how much to trust a prediction is important for many critical applications. We describe two simple approaches to estimate uncertainty in regression prediction tasks and compare their performance and complexity against popular approaches. We operationalize uncertainty in regression as the absolute error between a model's prediction and the ground truth. Our two proposed approaches use a secondary model to predict the uncertainty of a primary predictive model. Our first approach leverages the assumption that similar observations are likely to have similar uncertainty and predicts uncertainty with a non-parametric method. Our second approach trains a secondary model to directly predict the uncertainty of the primary predictive model. Both approaches outperform other established uncertainty estimation approaches on the MNIST, DISFA, and BP4D+ datasets. Furthermore, we observe that approaches that directly predict the uncertainty generally perform better than approaches that indirectly estimate uncertainty.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/fg47880.2020.00045","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25453101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Nonverbal Behavioral Patterns Predict Social Rejection Elicited Aggression.
Megan Quarmley, Zhibo Yang, Shahrukh Athar, Gregory Zelinsky, Dimitris Samaras, Johanna M Jarcho
{"title":"Nonverbal Behavioral Patterns Predict Social Rejection Elicited Aggression.","authors":"Megan Quarmley,&nbsp;Zhibo Yang,&nbsp;Shahrukh Athar,&nbsp;Gregory Zelinksy,&nbsp;Dimitris Samaras,&nbsp;Johanna M Jarcho","doi":"10.1109/fg47880.2020.00111","DOIUrl":"https://doi.org/10.1109/fg47880.2020.00111","url":null,"abstract":"<p><p>Peer-based aggression following social rejection is a costly and prevalent problem for which existing treatments have had little success. This may be because aggression is a complex process influenced by current states of attention and arousal, which are difficult to measure on a moment to moment basis via self report. It is therefore crucial to identify nonverbal behavioral indices of attention and arousal that predict subsequent aggression. We used Support Vector Machines (SVMs) and eye gaze duration and pupillary response features, measured during positive and negative peer-based social interactions, to predict subsequent aggressive behavior towards those same peers. We found that eye gaze and pupillary reactivity not only predicted aggressive behavior, but performed better than models that included information about the participant's exposure to harsh parenting or trait aggression. Eye gaze and pupillary reactivity models also performed equally as well as those that included information about peer reputation (e.g. whether the peer was rejecting or accepting). This is the first study to decode nonverbal eye behavior during social interaction to predict social rejection-elicited aggression.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/fg47880.2020.00111","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39774870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detecting Depression Severity by Interpretable Representations of Motion Dynamics.
Anis Kacem, Zakia Hammal, Mohamed Daoudi, Jeffrey Cohn
{"title":"Detecting Depression Severity by Interpretable Representations of Motion Dynamics.","authors":"Anis Kacem, Zakia Hammal, Mohamed Daoudi, Jeffrey Cohn","doi":"10.1109/FG.2018.00116","DOIUrl":"10.1109/FG.2018.00116","url":null,"abstract":"<p><p>Recent breakthroughs in deep learning using automated measurement of face and head motion have made possible the first objective measurement of depression severity. While powerful, deep learning approaches lack interpretability. We developed an interpretable method of automatically measuring depression severity that uses barycentric coordinates of facial landmarks and a Lie-algebra based rotation matrix of 3D head motion. Using these representations, kinematic features are extracted, preprocessed, and encoded using Gaussian Mixture Models (GMM) and Fisher vector encoding. A multi-class SVM is used to classify the encoded facial and head movement dynamics into three levels of depression severity. The proposed approach was evaluated in adults with history of chronic depression. The method approached the classification accuracy of state-of-the-art deep learning while enabling clinically and theoretically relevant findings. The velocity and acceleration of facial movement strongly mapped onto depression severity symptoms consistent with clinical data and theory.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6157749/pdf/nihms950419.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36538326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning for Action and Gesture Recognition in Image Sequences: A Survey
Maryam Asadi-Aghbolaghi, Albert Clapés, M. Bellantonio, H. Escalante, V. Ponce-López, Xavier Baró, Isabelle M Guyon, S. Kasaei, Sergio Escalera
{"title":"Deep Learning for Action and Gesture Recognition in Image Sequences: A Survey","authors":"Maryam Asadi-Aghbolaghi, Albert Clapés, M. Bellantonio, H. Escalante, V. Ponce-López, Xavier Baró, Isabelle M Guyon, S. Kasaei, Sergio Escalera","doi":"10.1007/978-3-319-57021-1_19","DOIUrl":"https://doi.org/10.1007/978-3-319-57021-1_19","url":null,"abstract":"","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80009798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 48
FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge.
Michel F Valstar, Enrique Sánchez-Lozano, Jeffrey F Cohn, László A Jeni, Jeffrey M Girard, Zheng Zhang, Lijun Yin, Maja Pantic
{"title":"FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge.","authors":"Michel F Valstar, Enrique Sánchez-Lozano, Jeffrey F Cohn, László A Jeni, Jeffrey M Girard, Zheng Zhang, Lijun Yin, Maja Pantic","doi":"10.1109/FG.2017.107","DOIUrl":"10.1109/FG.2017.107","url":null,"abstract":"<p><p>The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views), displaying ecologically valid expressions. The main obstacle for assessing this is the availability of suitable data, and the challenge proposed here addresses this limitation. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Units occurrence and intensity under different camera views. In this paper we present the third challenge in automatic recognition of facial expressions, to be held in conjunction with the 12th IEEE conference on Face and Gesture Recognition, May 2017, in Washington, United States. Two sub-challenges are defined: the detection of AU occurrence, and the estimation of AU intensity. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for both sub-challenges.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/FG.2017.107","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35967120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 123
Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database.
Jeffrey M Girard, Wen-Sheng Chu, László A Jeni, Jeffrey F Cohn, Fernando De la Torre, Michael A Sayette
{"title":"Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database.","authors":"Jeffrey M Girard, Wen-Sheng Chu, László A Jeni, Jeffrey F Cohn, Fernando De la Torre, Michael A Sayette","doi":"10.1109/FG.2017.144","DOIUrl":"10.1109/FG.2017.144","url":null,"abstract":"<p><p>Despite the important role that facial expressions play in interpersonal communication and our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid in the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The highest performance scores were found for the deep learning and active patch learning methods. Learn more at http://osf.io/7wcyz.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/FG.2017.144","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35966631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 54
Challenges in Multi-modal Gesture Recognition
Sergio Escalera, V. Athitsos, Isabelle M Guyon
{"title":"Challenges in Multi-modal Gesture Recognition","authors":"Sergio Escalera, V. Athitsos, Isabelle M Guyon","doi":"10.1007/978-3-319-57021-1_1","DOIUrl":"https://doi.org/10.1007/978-3-319-57021-1_1","url":null,"abstract":"","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88215947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 75
Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis.
Jeffrey M Girard, Jeffrey F Cohn, Mohammad H Mahoor, Seyedmohammad Mavadati, Dean P Rosenwald
{"title":"Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis.","authors":"Jeffrey M Girard, Jeffrey F Cohn, Mohammad H Mahoor, Seyedmohammad Mavadati, Dean P Rosenwald","doi":"10.1109/FG.2013.6553748","DOIUrl":"10.1109/FG.2013.6553748","url":null,"abstract":"<p><p>Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the \"social risk hypothesis\" of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3935843/pdf/nihms555449.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40286185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-time Avatar Animation from a Single Image.
Jason M Saragih, Simon Lucey, Jeffrey F Cohn
{"title":"Real-time Avatar Animation from a Single Image.","authors":"Jason M Saragih, Simon Lucey, Jeffrey F Cohn","doi":"10.1109/FG.2011.5771383","DOIUrl":"10.1109/FG.2011.5771383","url":null,"abstract":"<p><p>A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3935737/pdf/nihms-554963.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40285898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0