International Conference on Affective Computing and Intelligent Interaction and workshops: [proceedings]. ACII (Conference) (Latest Publications)

Action Unit Models of Facial Expression of Emotion in the Presence of Speech.
Miraj Shah, David G Cooper, Houwei Cao, Ruben C Gur, Ani Nenkova, Ragini Verma
ACII 2013, pp. 49-54. DOI: 10.1109/ACII.2013.15. Published: 2013-09-01.
Abstract: Automatic recognition of emotion using facial expressions in the presence of speech poses a unique challenge, because talking reveals clues to the affective state of the speaker but distorts the canonical expression of emotion on the face. We introduce a corpus of acted emotion expression in which speech is either present (talking) or absent (silent). The corpus is uniquely suited for analysis of the interplay between the two conditions. We use a multimodal decision-level fusion classifier to combine models of emotion from talking and silent faces, as well as from audio, to recognize five basic emotions: anger, disgust, fear, happiness, and sadness. Our results strongly indicate that emotion prediction from action unit facial features is less accurate when the person is talking. Modeling talking and silent expressions separately and fusing the two models greatly improves accuracy of prediction in the talking setting. The advantages are most pronounced when silent and talking face models are fused with predictions from audio features. In this multimodal prediction, both the combination of modalities and the separate models of talking and silent facial expression of emotion contribute to the improvement. (A minimal decision-level fusion sketch follows this entry.)
Citations: 11
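The abstract describes decision-level (late) fusion of talking-face, silent-face, and audio emotion models. Below is a minimal sketch of one common late-fusion rule, weighted averaging of per-class posteriors; the weights, function names, and example scores are illustrative assumptions, not the authors' configuration.

```python
# Illustrative decision-level fusion of three emotion models
# (talking-face, silent-face, audio). Weights and example posteriors
# are assumptions for illustration, not the paper's exact setup.
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness"]

def fuse_decisions(p_talking, p_silent, p_audio, weights=(0.4, 0.2, 0.4)):
    """Weighted average of per-class posteriors from each modality."""
    probs = np.stack([p_talking, p_silent, p_audio])   # shape (3, 5)
    fused = np.average(probs, axis=0, weights=weights)  # shape (5,)
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: each model outputs a probability distribution over the emotions.
p_talk  = np.array([0.10, 0.05, 0.15, 0.50, 0.20])
p_quiet = np.array([0.20, 0.10, 0.10, 0.40, 0.20])
p_audio = np.array([0.05, 0.05, 0.10, 0.60, 0.20])
label, fused = fuse_decisions(p_talk, p_quiet, p_audio)
print(label, fused.round(3))  # -> "happiness" plus the fused distribution
```

One appeal of late fusion in this setting is that the talking-face and silent-face models can be trained entirely separately, matching the paper's finding that modeling the two conditions separately helps.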
Facing Imbalanced Data: Recommendations for the Use of Performance Metrics.
László A Jeni, Jeffrey F Cohn, Fernando De La Torre
ACII 2013, pp. 245-251. DOI: 10.1109/ACII.2013.47. Published: 2013-01-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4285355/pdf/nihms-554962.pdf
Abstract: Recognizing facial action units (AUs) is important for situation analysis and automated video annotation. Previous work has emphasized face tracking and registration and the choice of features and classifiers. Relatively neglected is the effect of imbalanced data on action unit detection. While the machine learning community has become aware of the problem of skewed data for training classifiers, little attention has been paid to how skew may bias performance metrics. To address this question, we conducted experiments using both simulated classifiers and three major databases that differ in size, type of FACS coding, and degree of skew. We evaluated the influence of skew on both threshold metrics (accuracy, F-score, Cohen's kappa, and Krippendorff's alpha) and rank metrics (area under the receiver operating characteristic (ROC) curve and the precision-recall curve). With the exception of area under the ROC curve, all were attenuated by skewed distributions, in many cases dramatically so. While ROC was unaffected by skew, precision-recall curves suggest that ROC may mask poor performance. Our findings suggest that skew is a critical factor in evaluating performance metrics. To avoid or minimize skew-biased estimates of performance, we recommend reporting skew-normalized scores along with the obtained ones. (A sketch of skew-normalized scoring follows this entry.)
Citations: 0
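The abstract's core recommendation is to report skew-normalized scores alongside the raw ones. A minimal sketch of the idea: rescale the confusion matrix so each true class contributes equal mass, then recompute the metric. The normalization below is one plausible reading of that recommendation, not necessarily the authors' exact procedure.

```python
# Shows how class skew biases threshold metrics, and one way to report a
# skew-normalized score: rescale each true-class row of the confusion
# matrix to unit mass (as if classes were balanced), then recompute.
def metrics(tn, fp, fn, tp):
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, f1

def skew_normalized(tn, fp, fn, tp):
    """Recompute metrics as if both classes had equal prior mass."""
    neg, pos = tn + fp, fn + tp
    return metrics(tn / neg, fp / neg, fn / pos, tp / pos)

# An AU detector on heavily skewed data: 95% of frames are negative.
tn, fp, fn, tp = 900, 50, 25, 25
print(metrics(tn, fp, fn, tp))          # (0.925, 0.40): skew inflates
                                        # accuracy and attenuates F1
print(skew_normalized(tn, fp, fn, tp))  # (~0.72, ~0.64): the balanced view
```

The example mirrors the paper's finding: raw accuracy looks flattering under skew while F-score is depressed, and the skew-normalized pair reveals both effects.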
Automatically Detecting Pain Using Facial Actions.
Patrick Lucey, Jeffrey Cohn, Simon Lucey, Iain Matthews, Sridha Sridharan, Kenneth M Prkachin
ACII 2009, pp. 1-8. DOI: 10.1109/ACII.2009.5349321. Published: 2009-12-08.
Abstract: Pain is generally measured by patient self-report, normally via verbal communication. However, if the patient is a child or has limited ability to communicate (e.g. patients who are nonverbal, cognitively impaired, or on assisted breathing), self-report may not be a viable measurement. In addition, these self-report measures relate only to the maximum pain level experienced during a sequence, so a frame-by-frame measure is currently not obtainable. Using image data from patients with rotator-cuff injuries, we describe an AAM-based automatic system that can detect pain at the frame level. We do this in two ways: directly (straight from the facial features) and indirectly (through the fusion of individual AU detectors). Our results show that the latter method achieves the best performance, as the most discriminative features from each AU detector (i.e. shape or appearance) are used. (A sketch of the indirect, AU-fusion route follows this entry.)
Citations: 97
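The "indirect" route in the abstract fuses the outputs of per-AU detectors into a frame-level pain decision. The toy sketch below makes its assumptions explicit: synthetic per-frame AU scores stand in for real detector outputs, and logistic regression stands in for the second-stage fusion model; neither is the paper's actual AAM-based pipeline.

```python
# Toy sketch of the "indirect" route: per-frame outputs of individual AU
# detectors are fused by a second-stage classifier into pain / no pain.
# Synthetic scores, the AU list, and the logistic-regression fusion are
# illustrative assumptions, not the paper's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for scores from six AU detectors (pain-related AUs such as
# AU4, AU6, AU7, AU9, AU10, and AU43 in the Prkachin-Solomon pain model).
n_frames = 400
au_scores = rng.normal(size=(n_frames, 6))

# Synthetic ground truth: pain frames co-occur with high scores on the
# first three AU channels, plus noise.
pain = (au_scores[:, :3].sum(axis=1)
        + 0.5 * rng.normal(size=n_frames) > 1.0).astype(int)

# Second-stage fusion: map the vector of AU scores to a pain decision.
fusion = LogisticRegression().fit(au_scores[:300], pain[:300])
print("held-out frame accuracy:", fusion.score(au_scores[300:], pain[300:]))
```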
Get The FACS Fast: Automated FACS face analysis benefits from the addition of velocity.
Timothy R Brick, Michael D Hunter, Jeffrey F Cohn
ACII 2009 (10-12 Sept 2009), pp. 1-7. DOI: 10.1109/ACII.2009.5349600. Published: 2009-09-01.
Abstract: Much progress has been made in automated facial image analysis, yet current approaches still lag behind what is possible with manual labeling of facial actions. While many factors may contribute, a key one may be the limited attention paid to the dynamics of facial action. Most approaches classify frames in terms of either displacement from a neutral, mean face or, less frequently, displacement between successive frames (i.e. velocity). In this paper, we evaluated the hypothesis that attention to dynamics can boost recognition rates. Using the well-known Cohn-Kanade database and support vector machines, adding velocity and acceleration reduced the number of incorrect classifications by 14.2% and 11.2%, respectively. Average classification accuracy for the displacement-plus-velocity classifier system across all classifiers was 90.2%. Findings were replicated using linear discriminant analysis, which showed a mean decrease of 16.4% in incorrect classifications across classifiers. These findings suggest that information about the dynamics of a movement, that is, the velocity and to a lesser extent the acceleration of a change, can usefully inform the classification of facial expressions. (A sketch of adding velocity and acceleration features follows this entry.)
Citations: 13
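The abstract's main move, augmenting per-frame displacement features with velocity and acceleration, can be sketched with finite differences over the feature time series. The toy data, feature shapes, and SVM settings below are assumptions for illustration; the paper's actual features come from the Cohn-Kanade database.

```python
# Sketch of augmenting displacement features with velocity and
# acceleration via temporal finite differences, then classifying with an
# SVM, as the abstract describes. Toy data and settings are assumptions.
import numpy as np
from sklearn.svm import SVC

def add_dynamics(disp):
    """disp: (n_frames, n_features) displacement from the neutral face.
    Returns per-frame [displacement, velocity, acceleration] features."""
    vel = np.gradient(disp, axis=0)  # first temporal derivative
    acc = np.gradient(vel, axis=0)   # second temporal derivative
    return np.hstack([disp, vel, acc])

# Toy sequence: 100 frames x 20 landmark-displacement features.
rng = np.random.default_rng(1)
disp = rng.normal(size=(100, 20)).cumsum(axis=0)
X = add_dynamics(disp)                    # (100, 60)
y = rng.integers(0, 2, size=100)          # placeholder expression labels
clf = SVC(kernel="linear").fit(X[:80], y[:80])
print("toy accuracy:", clf.score(X[80:], y[80:]))
```

np.gradient uses central differences, which is one reasonable stand-in for the per-frame velocity and acceleration the paper describes.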