IEEE International Conference on Automatic Face & Gesture Recognition and Workshops - Latest Publications

The Proper Treatment of Linguistic Ambiguity in Ordinary Algebra
IEEE International Conference on Automatic Face & Gesture Recognition and Workshops. Pub Date: 2015-08-01. DOI: 10.1007/978-3-662-53042-9_18
C. Wurm, Timm Lichte
{"title":"The Proper Treatment of Linguistic Ambiguity in Ordinary Algebra","authors":"C. Wurm, Timm Lichte","doi":"10.1007/978-3-662-53042-9_18","DOIUrl":"https://doi.org/10.1007/978-3-662-53042-9_18","url":null,"abstract":"","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"134 1","pages":"306-322"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78037931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
A Single Movement Normal Form for Minimalist Grammars
IEEE International Conference on Automatic Face & Gesture Recognition and Workshops. Pub Date: 2015-08-01. DOI: 10.1007/978-3-662-53042-9_12
T. Graf, Alëna Aksënova, Aniello De Santo
{"title":"A Single Movement Normal Form for Minimalist Grammars","authors":"T. Graf, Alëna Aksënova, Aniello De Santo","doi":"10.1007/978-3-662-53042-9_12","DOIUrl":"https://doi.org/10.1007/978-3-662-53042-9_12","url":null,"abstract":"","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"35 1","pages":"200-215"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90627464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Foreword - Biometrics in the Wild 2015
B. Bhanu, A. Hadid, Q. Ji, M. Nixon, V. Štruc
{"title":"Foreword - Biometrics in the Wild 2015","authors":"B. Bhanu, A. Hadid, Q. Ji, M. Nixon, V. Štruc","doi":"10.1109/FG.2015.7284809","DOIUrl":"https://doi.org/10.1109/FG.2015.7284809","url":null,"abstract":"The first International Workshop on Biometrics in the Wild (B-Wild 2015) was held on May 8th, 2015 in conjunction with the 11th IEEE International Conference on Automatic Face and Gesture Recognition (IEEE FG-2015) in Ljubljana, Slovenia. The goal of the workshop was to present the most advanced work related to biometric recognition in the wild and to bring recent advances from this field to the attention of the broader FG community.","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"10 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2015-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89944218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
FERA 2014 chairs' welcome
M. Valstar, G. McKeown, M. Mehu, L. Yin, M. Pantic, J. Cohn
{"title":"FERA 2014 chairs' welcome","authors":"M. Valstar, G. McKeown, M. Mehu, L. Yin, M. Pantic, J. Cohn","doi":"10.1109/FG.2015.7284866","DOIUrl":"https://doi.org/10.1109/FG.2015.7284866","url":null,"abstract":"It is our great pleasure to welcome you to the 2d Facial Expression Recognition and Analysis challenge and workshop (FERA 2015), held in conjunction with the 11th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2015). It's been four years since the first facial expression recognition challenge (FERA 2011), and we're excited to come back to challenge researchers worldwide to go ever further in the automatic recognition of facial expressions. This year's challenge and associated workshop pushes the boundaries of expression recognition by focusing on the estimation of FACS Facial Action Unit intensity, as well as regular frame-based occurrence detection. The challenge is set on previously unreleased data of extensive duration (over 350,000 annotated frames) of relatively naturalistic scenarios taken from the BP4D and SEMAINE databases.","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"1 1","pages":"iii"},"PeriodicalIF":0.0,"publicationDate":"2015-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88936141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Three Dimensional Binary Edge Feature Representation for Pain Expression Analysis
IEEE International Conference on Automatic Face & Gesture Recognition and Workshops. Pub Date: 2015-05-01 (Epub 2015-07-23). DOI: 10.1109/fg.2015.7163107
Xing Zhang, Lijun Yin, Jeffrey F Cohn
{"title":"Three Dimensional Binary Edge Feature Representation for Pain Expression Analysis.","authors":"Xing Zhang,&nbsp;Lijun Yin,&nbsp;Jeffrey F Cohn","doi":"10.1109/fg.2015.7163107","DOIUrl":"https://doi.org/10.1109/fg.2015.7163107","url":null,"abstract":"<p><p>Automatic pain expression recognition is a challenging task for pain assessment and diagnosis. Conventional 2D-based approaches to automatic pain detection lack robustness to the moderate to large head pose variation and changes in illumination that are common in real-world settings and with few exceptions omit potentially informative temporal information. In this paper, we propose an innovative 3D binary edge feature (3D-BE) to represent high-resolution 3D dynamic facial expression. To exploit temporal information, we apply a latent-dynamic conditional random field approach with the 3D-BE. The resulting pain expression detection system proves that 3D-BE represents the pain facial features well, and illustrates the potential of noncontact pain detection from 3D facial expression data.</p>","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"2015 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/fg.2015.7163107","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38146268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
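The abstract above does not spell out how the 3D binary edge feature is computed. Purely as a hypothetical sketch of the general idea — binarizing an edge response computed over 3D facial geometry — the Python snippet below thresholds the Laplacian of a depth map into a binary edge mask. The function, the Laplacian choice, and the threshold are all assumptions for illustration, not the authors' 3D-BE descriptor.

```python
import numpy as np
from scipy import ndimage

def binary_edge_feature(depth_map, threshold=0.05):
    """Hypothetical illustration: binarize an edge response on a facial depth map.
    This is NOT the paper's 3D-BE descriptor, only the general binary-edge idea."""
    response = ndimage.laplace(depth_map.astype(np.float64))  # 2nd-derivative edges
    response /= np.abs(response).max() + 1e-12                # scale-free threshold
    return (np.abs(response) > threshold).astype(np.uint8)    # 1 = sharp surface bend

# Toy usage: a synthetic depth map with a furrow-like ridge.
depth = np.zeros((64, 64))
depth[30:34, :] = 1.0
edges = binary_edge_feature(depth)
print(edges.sum(), "edge pixels")  # per-frame maps like this would feed a temporal model
```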
Dense 3D Face Alignment from 2D Videos in Real-Time
László A Jeni, Jeffrey F Cohn, Takeo Kanade
{"title":"Dense 3D Face Alignment from 2D Videos in Real-Time.","authors":"László A Jeni,&nbsp;Jeffrey F Cohn,&nbsp;Takeo Kanade","doi":"10.1109/FG.2015.7163142","DOIUrl":"https://doi.org/10.1109/FG.2015.7163142","url":null,"abstract":"<p><p>To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.</p>","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"1 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/FG.2015.7163142","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34570073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 173
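Cascade regression, the core of the approach above, refines a shape estimate x over K stages via x_{k+1} = x_k + R_k φ(I, x_k), where φ extracts image features around the current landmark estimates. Below is a minimal sketch of the inference loop, with a stand-in feature extractor and random weights in place of trained regressors; all names and sizes here are illustrative assumptions, not the zface implementation.

```python
import numpy as np

def extract_features(image, landmarks):
    """Stand-in for phi(I, x): sample pixel intensities at the current landmark
    estimates (real systems use SIFT/HOG-like descriptors around each point)."""
    h, w = image.shape
    pts = np.clip(landmarks.reshape(-1, 2).astype(int), 0, [w - 1, h - 1])
    return image[pts[:, 1], pts[:, 0]]

def run_cascade(image, x0, regressors):
    """Cascaded regression inference: x_{k+1} = x_k + R_k @ phi(I, x_k)."""
    x = x0.copy()
    for R in regressors:
        x = x + R @ extract_features(image, x)
    return x

# Toy usage with random weights standing in for trained regressors.
rng = np.random.default_rng(0)
n_landmarks = 66
image = rng.random((128, 128))
x0 = rng.uniform(20.0, 100.0, size=2 * n_landmarks)              # mean-shape init
regressors = [0.01 * rng.standard_normal((2 * n_landmarks, n_landmarks))
              for _ in range(3)]                                 # K = 3 stages
print(run_cascade(image, x0, regressors).shape)                  # (132,) refined coords
```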
How much training data for facial action unit detection?
Jeffrey M Girard, Jeffrey F Cohn, László A Jeni, Simon Lucey, Fernando De la Torre
{"title":"How much training data for facial action unit detection?","authors":"Jeffrey M Girard,&nbsp;Jeffrey F Cohn,&nbsp;László A Jeni,&nbsp;Simon Lucey,&nbsp;Fernando De la Torre","doi":"10.1109/FG.2015.7163106","DOIUrl":"https://doi.org/10.1109/FG.2015.7163106","url":null,"abstract":"<p><p>By systematically varying the number of subjects and the number of frames per subject, we explored the influence of training set size on appearance and shape-based approaches to facial action unit (AU) detection. Digital video and expert coding of spontaneous facial activity from 80 subjects (over 350,000 frames) were used to train and test support vector machine classifiers. Appearance features were shape-normalized SIFT descriptors and shape features were 66 facial landmarks. Ten-fold cross-validation was used in all evaluations. Number of subjects and number of frames per subject differentially affected appearance and shape-based classifiers. For appearance features, which are high-dimensional, increasing the number of training subjects from 8 to 64 incrementally improved performance, regardless of the number of frames taken from each subject (ranging from 450 through 3600). In contrast, for shape features, increases in the number of training subjects and frames were associated with mixed results. In summary, maximal performance was attained using appearance features from large numbers of subjects with as few as 450 frames per subject. These findings suggest that variation in the number of subjects rather than number of frames per subject yields most efficient performance.</p>","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"1 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/FG.2015.7163106","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34558406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35
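A hedged sketch of the kind of experiment described above — training a linear SVM on per-frame features while varying the number of training subjects — using scikit-learn on synthetic data. The feature dimensionality, label generation, and subject counts are toy placeholders, not the paper's SIFT/landmark pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
n_subjects, frames_per_subject, dim = 16, 450, 128               # toy scale
X = rng.standard_normal((n_subjects, frames_per_subject, dim))
# Synthetic AU labels tied to one feature so there is something to learn.
y = (X[:, :, 0] + 0.5 * rng.standard_normal((n_subjects, frames_per_subject)) > 0).astype(int)

test_subjects = [12, 13, 14, 15]
for n_train in (2, 4, 8):                                        # vary training subjects
    train_subjects = list(range(n_train))
    Xtr, ytr = X[train_subjects].reshape(-1, dim), y[train_subjects].ravel()
    Xte, yte = X[test_subjects].reshape(-1, dim), y[test_subjects].ravel()
    clf = LinearSVC(dual=False).fit(Xtr, ytr)
    print(n_train, "training subjects -> F1:", round(f1_score(yte, clf.predict(Xte)), 3))
```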
IntraFace
Fernando De la Torre, Wen-Sheng Chu, Xuehan Xiong, Francisco Vicente, Xiaoyu Ding, Jeffrey Cohn
{"title":"IntraFace.","authors":"Fernando De la Torre, Wen-Sheng Chu, Xuehan Xiong, Francisco Vicente, Xiaoyu Ding, Jeffrey Cohn","doi":"10.1109/FG.2015.7163082","DOIUrl":"10.1109/FG.2015.7163082","url":null,"abstract":"<p><p>Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous-driving, surveillance, and facial editing among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly-available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IFincludes a newly develop technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.</p>","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"1 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4918819/pdf/nihms-751967.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34612877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
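IntraFace's synchrony-detection API is not documented in this listing, so the sketch below only illustrates the underlying idea of correlated facial behavior: a sliding-window Pearson correlation between two per-frame smile-intensity signals. The window size and the synthetic signals are assumptions, not IntraFace code.

```python
import numpy as np

def windowed_synchrony(a, b, win=30):
    """Sliding-window Pearson correlation between two per-frame signals
    (e.g. smile intensities of two interacting people). Illustration only."""
    out = np.full(len(a) - win + 1, np.nan)
    for t in range(len(out)):
        wa, wb = a[t:t + win], b[t:t + win]
        if wa.std() > 0 and wb.std() > 0:                 # avoid divide-by-zero
            out[t] = np.corrcoef(wa, wb)[0, 1]
    return out

# Toy usage: b mirrors a with a short lag, so synchrony should be high.
rng = np.random.default_rng(2)
a = np.sin(np.linspace(0, 8 * np.pi, 300)) + 0.1 * rng.standard_normal(300)
b = np.roll(a, 5) + 0.1 * rng.standard_normal(300)
print("mean windowed correlation:", round(float(np.nanmean(windowed_synchrony(a, b))), 2))
```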
Cross-Cultural Detection of Depression from Nonverbal Behaviour
Sharifa Alghowinem, Roland Goecke, Jeffrey F Cohn, Michael Wagner, Gordon Parker, Michael Breakspear
{"title":"Cross-Cultural Detection of Depression from Nonverbal Behaviour.","authors":"Sharifa Alghowinem,&nbsp;Roland Goecke,&nbsp;Jeffrey F Cohn,&nbsp;Michael Wagner,&nbsp;Gordon Parker,&nbsp;Michael Breakspear","doi":"10.1109/FG.2015.7163113","DOIUrl":"https://doi.org/10.1109/FG.2015.7163113","url":null,"abstract":"<p><p>Millions of people worldwide suffer from depression. Do commonalities exist in their nonverbal behavior that would enable cross-culturally viable screening and assessment of severity? We investigated the generalisability of an approach to detect depression severity cross-culturally using video-recorded clinical interviews from Australia, the USA and Germany. The material varied in type of interview, subtypes of depression and inclusion healthy control subjects, cultural background, and recording environment. The analysis focussed on temporal features of participants' eye gaze and head pose. Several approaches to training and testing within and between datasets were evaluated. The strongest results were found for training across all datasets and testing across datasets using leave-one-subject-out cross-validation. In contrast, generalisability was attenuated when training on only one or two of the three datasets and testing on subjects from the dataset(s) not used in training. These findings highlight the importance of using training data exhibiting the expected range of variability.</p>","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"1 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/FG.2015.7163113","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34699799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 65
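Leave-one-subject-out cross-validation, the protocol behind the strongest results above, holds out every sample of one subject per fold so that no subject appears in both training and test sets. A minimal scikit-learn sketch with placeholder gaze/head-pose features (sizes and labels are synthetic, not the study's data):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_subjects, per_subject, dim = 30, 4, 20                          # toy sizes
X = rng.standard_normal((n_subjects * per_subject, dim))          # e.g. gaze/pose stats
y = (X[:, 0] > 0).astype(int)                                     # synthetic labels
groups = np.repeat(np.arange(n_subjects), per_subject)            # subject IDs

# Each fold holds out all samples belonging to one subject.
scores = cross_val_score(SVC(), X, y, groups=groups, cv=LeaveOneGroupOut())
print("LOSO accuracy:", round(scores.mean(), 2))
```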
The Conjoinability Relation in Discontinuous Lambek Calculus
IEEE International Conference on Automatic Face & Gesture Recognition and Workshops. Pub Date: 2014-08-16. DOI: 10.1007/978-3-662-44121-3_11
A. Sorokin
{"title":"The Conjoinability Relation in Discontinuous Lambek Calculus","authors":"A. Sorokin","doi":"10.1007/978-3-662-44121-3_11","DOIUrl":"https://doi.org/10.1007/978-3-662-44121-3_11","url":null,"abstract":"","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"23 1","pages":"171-184"},"PeriodicalIF":0.0,"publicationDate":"2014-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81488662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0