Proceedings of the ... International Conference on Automatic Face and Gesture Recognition (IEEE International Conference on Automatic Face & Gesture Recognition): Latest Publications
Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis.
Jeffrey M Girard, Jeffrey F Cohn, Mohammad H Mahoor, Seyedmohammad Mavadati, Dean P Rosenwald
DOI: 10.1109/FG.2013.6553748. Pages 1-8. Published 2013-01-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3935843/pdf/nihms555449.pdf
Abstract: Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the "social risk hypothesis" of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science.
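The headline methodological result of the depression paper is the high frame-level consistency between manual and automatic FACS action-unit coding. A minimal sketch of how such agreement can be quantified, using Cohen's kappa on binary presence codes; the choice of statistic and the toy data are assumptions for illustration, not details taken from the paper:

```python
def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two binary coders."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    p_a = sum(coder_a) / n            # base rate at which coder A marks the AU present
    p_b = sum(coder_b) / n            # base rate for coder B
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    return 1.0 if expected == 1.0 else (observed - expected) / (1 - expected)

# Hypothetical per-frame codes for one action unit (1 = present); toy data.
manual    = [1, 1, 0, 0, 1, 0, 1, 0]   # manual FACS coder
automatic = [1, 1, 0, 0, 1, 0, 0, 0]   # automatic system
print(cohens_kappa(manual, automatic))  # → 0.75
```

Kappa discounts the agreement two coders would reach by chance alone, which matters here because most action units are absent in most frames, inflating raw percent agreement.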
{"title":"Real-time Avatar Animation from a Single Image.","authors":"Jason M Saragih, Simon Lucey, Jeffrey F Cohn","doi":"10.1109/FG.2011.5771383","DOIUrl":"10.1109/FG.2011.5771383","url":null,"abstract":"<p><p>A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":" ","pages":"117-124"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3935737/pdf/nihms-554963.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40285898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deformable Face Fitting with Soft Correspondence Constraints.","authors":"Jason M Saragih, Simon Lucey, Jeffrey F Cohn","doi":"10.1109/AFGR.2008.4813374","DOIUrl":"10.1109/AFGR.2008.4813374","url":null,"abstract":"<p><p>Despite significant progress in deformable model fitting over the last decade, the problem of efficient and accurate person-independent face fitting remains a challenging problem. In this work, a reformulation of the generative fitting objective is presented, where only soft correspondences between the model and the image are enforced. This has the dual effect of improving robustness to unseen faces as well as affording fitting time which scales linearly with the model's complexity. This approach is compared with three state-of-the-art fitting methods on the problem of person independent face fitting, where it is shown to closely approach the accuracy of the currently best performing method while affording significant computational savings.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":"1 ","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2009-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2856958/pdf/nihms99715.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"28939754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}