2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG): Latest Publications

Facial Geometric Detail Recovery via Implicit Representation
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-03-18 DOI: 10.1109/FG57933.2023.10042505
Xingyu Ren, Alexandros Lattas, Baris Gecer, Jiankang Deng, Chao Ma, Xiaokang Yang, S. Zafeiriou
{"title":"Facial Geometric Detail Recovery via Implicit Representation","authors":"Xingyu Ren, Alexandros Lattas, Baris Gecer, Jiankang Deng, Chao Ma, Xiaokang Yang, S. Zafeiriou","doi":"10.1109/FG57933.2023.10042505","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042505","url":null,"abstract":"Learning a dense 3D model with fine-scale details from a single facial image is highly challenging and ill-posed. To address this problem, many approaches fit smooth geometries through facial prior while learning details as additional displacement maps or personalized basis. However, these techniques typically require vast datasets of paired multi-view data or 3D scans, whereas such datasets are scarce and expensive. To alleviate heavy data dependency, we present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image. Specifically, we inpaint occluded facial parts, generate complete textures, and build an accurate multi-view dataset of the target subject. In order to estimate the detailed geometry, we define an implicit signed distance function and employ a physically-based implicit renderer to reconstruct fine geometric details from the generated multiview images. Our method not only recovers accurate facial details but also decomposes the diffuse and specular albedo, normals and shading components in a self-supervised way. Finally, we register the implicit shape details to a 3D Morphable Model template, which can be used in traditional modeling and rendering pipelines. Extensive experiments demonstrate that the proposed approach can reconstruct impressive facial details from a single image, especially when compared with state-of-the-art methods trained on large datasets.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128244430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
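The abstract's key representational choice is an implicit signed distance function (SDF) for the face surface. The minimal PyTorch sketch below is not taken from the paper; the layer sizes, activations, and surface threshold are all assumptions, meant only to show what such an implicit representation looks like in code.

```python
import torch
import torch.nn as nn

class FacialSDF(nn.Module):
    """Toy implicit surface: maps a 3D point to a signed distance."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),  # signed distance to the face surface
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.net(xyz)

sdf = FacialSDF()
points = torch.rand(1024, 3) * 2 - 1        # query points in [-1, 1]^3
distances = sdf(points)                     # (1024, 1) signed distances
surface_mask = distances.abs() < 1e-2       # points near the zero level set
```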
S2F2: Self-Supervised High Fidelity Face Reconstruction from Monocular Image
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-03-15 DOI: 10.1109/FG57933.2023.10042713
Abdallah Dib, Junghyun Ahn, C. Thébault, P. Gosselin, Louis Chevallier
{"title":"S2F2: Self-Supervised High Fidelity Face Reconstruction from Monocular Image","authors":"Abdallah Dib, Junghyun Ahn, C. Thébault, P. Gosselin, Louis Chevallier","doi":"10.1109/FG57933.2023.10042713","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042713","url":null,"abstract":"We present a novel face reconstruction method capable of reconstructing detailed face geometry, spatially varying face reflectance from a single monocular image. We build our work upon the recent advances of DNN-based auto-encoders with differentiable ray tracing image formation, trained in self-supervised manner. While providing the advantage of learning-based approaches and real-time reconstruction, the latter methods lacked fidelity. In this work, we achieve, for the first time, high fidelity face reconstruction using self-supervised learning only. Our novel coarse-to-fine deep architecture allows us to solve the challenging problem of decoupling face reflectance from geometry using a single image, at high computational speed. Compared to state-of-the-art methods, our method achieves more visually appealing reconstruction.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124037469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
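The abstract's central design is a coarse-to-fine architecture. The toy PyTorch sketch below illustrates the general coarse-to-fine pattern only: predict a low-resolution estimate, upsample it, and add a learned high-frequency residual. None of the layers or scale factors come from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFine(nn.Module):
    def __init__(self):
        super().__init__()
        self.coarse = nn.Conv2d(3, 3, 3, padding=1)  # low-res coarse prediction
        self.fine = nn.Conv2d(6, 3, 3, padding=1)    # high-frequency residual

    def forward(self, x):
        low = F.interpolate(x, scale_factor=0.25, mode="bilinear", align_corners=False)
        up = F.interpolate(self.coarse(low), size=x.shape[-2:],
                           mode="bilinear", align_corners=False)
        residual = self.fine(torch.cat([x, up], dim=1))
        return up + residual  # coarse estimate plus learned fine detail

model = CoarseToFine()
out = model(torch.rand(1, 3, 256, 256))  # (1, 3, 256, 256)
```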
AFFDEX 2.0: A Real-Time Facial Expression Analysis Toolkit
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-02-24 DOI: 10.1109/FG57933.2023.10042673
M. Bishay, Kenneth Preston, Matthew Strafuss, Graham Page, Jay Turcot, Mohammad Mavadati
{"title":"AFFDEX 2.0: A Real-Time Facial Expression Analysis Toolkit","authors":"M. Bishay, Kenneth Preston, Matthew Strafuss, Graham Page, Jay Turcot, Mohammad Mavadati","doi":"10.1109/FG57933.2023.10042673","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042673","url":null,"abstract":"In this paper we introduce AFFDEX 2.0 – a toolkit for analyzing facial expressions in the wild, that is, it is intended for users aiming to; a) estimate the 3D head pose, b) detect facial Action Units (AUs), c) recognize basic emotions and 2 new emotional states (sentimentality and confusion), and d) detect high-level expressive metrics like blink and attention. AFFDEX 2.0 models are mainly based on Deep Learning, and are trained using a large-scale naturalistic dataset consisting of thousands of participants from different demographic groups. AFFDEX 2.0 is an enhanced version of our previous toolkit [36], that is capable of tracking faces at challenging conditions, detecting more accurately facial expressions, and recognizing new emotional states (sentimentality and confusion). AFFDEX 2.0 outperforms the state-of-the-art methods in AU detection and emotion recognition. AFFDEX 2.0 can process multiple faces in real time, and is working across the Windows and Linux platforms.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121365961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
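AFFDEX 2.0's actual API is not shown in the abstract, so the snippet below is purely hypothetical: every name in it is invented, solely to illustrate the four groups of per-frame outputs the abstract lists (head pose, AUs, emotions, and expressive metrics).

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class FrameResult:
    """Hypothetical container for the outputs the abstract lists."""
    head_pose: Tuple[float, float, float]                         # (pitch, yaw, roll), degrees
    action_units: Dict[int, float] = field(default_factory=dict)  # AU id -> intensity
    emotions: Dict[str, float] = field(default_factory=dict)      # incl. sentimentality, confusion
    blink: bool = False
    attention: float = 0.0

def analyze_frame(frame) -> FrameResult:
    # Placeholder standing in for the toolkit's deep-learning pipeline;
    # AFFDEX 2.0's real entry points are not documented in this abstract.
    return FrameResult(head_pose=(0.0, 0.0, 0.0))
```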
Segmentation-Reconstruction-Guided Facial Image De-occlusion
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2021-12-15 DOI: 10.1109/FG57933.2023.10042570
Xiangnan Yin, Di Huang, Zehua Fu, Yunhong Wang, Liming Luke Chen
{"title":"Segmentation-Reconstruction-Guided Facial Image De-occlusion","authors":"Xiangnan Yin, Di Huang, Zehua Fu, Yunhong Wang, Liming Luke Chen","doi":"10.1109/FG57933.2023.10042570","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042570","url":null,"abstract":"Occlusions are very common in face images in the wild, leading to the degraded performance of face-related tasks. Although much effort has been devoted to removing occlusions from face images, the varying shapes and textures of occlusions still challenge the robustness of current methods. As a result, current methods either rely on manual occlusion masks or only apply to specific occlusions. This paper proposes a novel face de-occlusion model based on face segmentation and 3D face reconstruction, which is robust to arbitrary kinds of face occlusions. The proposed model consists of a 3D face reconstruction module, a face segmentation module, and an image generation module. With the face prior and the occlusion mask predicted by the first two, respectively, the image generation module can faithfully recover the missing facial textures. To supervise the training, we further build a large occlusion dataset, with both manually labeled and synthetic occlusions. Qualitative and quantitative results demonstrate the effectiveness and robustness of the proposed method.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131669289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
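The abstract describes feeding a predicted occlusion mask to an image generation module that recovers the missing texture. The sketch below shows one common way such mask-guided inpainting is wired up; this wiring is an assumption, not the paper's architecture, and a real system would also inject the 3D face prior the abstract mentions.

```python
import torch
import torch.nn as nn

class InpaintingGenerator(nn.Module):
    """Toy mask-guided generator: fills in only the occluded region."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # image (3ch) + mask (1ch)
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, occlusion_mask):
        masked = image * (1 - occlusion_mask)            # blank out occluded pixels
        out = self.net(torch.cat([masked, occlusion_mask], dim=1))
        # Keep visible pixels; synthesize texture only inside the mask.
        return masked + out * occlusion_mask

gen = InpaintingGenerator()
img = torch.rand(1, 3, 128, 128)
mask = torch.zeros(1, 1, 128, 128)
mask[..., 40:80, 40:80] = 1.0                            # pretend occlusion region
result = gen(img, mask)
```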
FLAME-in-NeRF: Neural control of Radiance Fields for Free View Face Animation
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2021-08-10 DOI: 10.1109/FG57933.2023.10042553
ShahRukh Athar, Zhixin Shu, D. Samaras
{"title":"FLAME-in-NeRF: Neural control of Radiance Fields for Free View Face Animation","authors":"ShahRukh Athar, Zhixin Shu, D. Samaras","doi":"10.1109/FG57933.2023.10042553","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042553","url":null,"abstract":"This paper presents a neural rendering method for controllable portrait video synthesis. Recent advances in volumetric neural rendering, such as neural radiance fields (NeRF), have enabled the photorealistic novel view synthesis of static scenes with impressive results. However, modeling dynamic and controllable objects as part of a scene with such scene representations is still challenging. In this work, we design a system that enables 1) novel view synthesis for portrait video, of both the human subject and the scene they are in and 2) explicit control of the facial expressions through a low-dimensional expression representation. We represent the distribution of human facial expressions using the expression parameters of a 3D Morphable Model (3DMM) and condition the NeRF volumetric function on them. In order to guide the network to learn disentangled control for static scene appearance and dynamic facial actions, we impose a spatial prior via 3DMM fitting. We show the effectiveness of our method on free view synthesis of portrait videos with expression controls. To train a scene, our method only requires a short video of a subject captured by a mobile device.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116375125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 17
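The core mechanism in the abstract is conditioning the NeRF volumetric function on 3DMM expression parameters. The minimal sketch below (layer sizes and the 50-dimensional expression code are assumptions) shows the conditioning pattern only: concatenate the expression vector to each sampled 3D point before the MLP.

```python
import torch
import torch.nn as nn

class ExpressionConditionedNeRF(nn.Module):
    """Toy radiance-field MLP conditioned on a 3DMM expression code."""
    def __init__(self, expr_dim: int = 50, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + expr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + density
        )

    def forward(self, xyz, expr):
        # Broadcast one expression vector across all sampled points.
        expr = expr.expand(xyz.shape[0], -1)
        rgb_sigma = self.net(torch.cat([xyz, expr], dim=-1))
        rgb = torch.sigmoid(rgb_sigma[:, :3])
        sigma = torch.relu(rgb_sigma[:, 3:])
        return rgb, sigma

model = ExpressionConditionedNeRF()
pts = torch.rand(4096, 3)            # sampled points along camera rays
expr = torch.zeros(1, 50)            # neutral expression code from a fitted 3DMM
rgb, sigma = model(pts, expr)
```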
Weakly-Supervised Photo-realistic Texture Generation for 3D Face Reconstruction
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2021-06-14 DOI: 10.1109/FG57933.2023.10042545
Xiangnan Yin, Di Huang, Zehua Fu, Yunhong Wang, Liming Chen
{"title":"Weakly-Supervised Photo-realistic Texture Generation for 3D Face Reconstruction","authors":"Xiangnan Yin, Di Huang, Zehua Fu, Yunhong Wang, Liming Chen","doi":"10.1109/FG57933.2023.10042545","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042545","url":null,"abstract":"Although much progress has been made recently in 3D face reconstruction, most previous work has been devoted to predicting accurate and fine-grained 3D shapes. In contrast, relatively little work has focused on generating high-fidelity face textures. Compared with the prosperity of photo-realistic 2D face image generation, high-fidelity 3D face texture generation has yet to be studied. In this paper, we propose a novel UV map generation model that predicts the UV map from a single face image. The model consists of a UV sampler and a UV generator. By selectively sampling the input face image's pixels and adjusting their relative locations, the UV sampler generates an incomplete UV map that could faithfully reconstruct the original face. Missing textures in the incomplete UV map are further full-filled by the UV generator. The training is based on pseudo ground truth blended by the 3DMM texture and the input face texture, thus weakly supervised. To deal with the artifacts in the imperfect pseudo UV map, multiple UV map and face image discriminators are leveraged.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"245 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114581114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
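The UV sampler described in the abstract gathers pixels from the input image into UV space. The toy sketch below uses `torch.nn.functional.grid_sample` to show the gathering operation itself; in the paper the sampling grid would be predicted per image, whereas here it is simply the identity grid.

```python
import torch
import torch.nn.functional as F

face = torch.rand(1, 3, 256, 256)    # input face image
# Identity sampling grid in [-1, 1]; a learned sampler would predict this.
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, 256), torch.linspace(-1, 1, 256), indexing="ij")
grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)     # (1, 256, 256, 2)
partial_uv = F.grid_sample(face, grid, align_corners=True)
# Texels with no valid source pixel would be left for the UV generator to fill.
```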
Unified Detection of Digital and Physical Face Attacks
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2021-04-05 DOI: 10.1109/FG57933.2023.10042500
Debayan Deb, Xiaoming Liu, Anil K. Jain
{"title":"Unified Detection of Digital and Physical Face Attacks","authors":"Debayan Deb, Xiaoming Liu, Anil K. Jain","doi":"10.1109/FG57933.2023.10042500","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042500","url":null,"abstract":"State-of-the-art defense mechanisms against face attacks achieve near perfect accuracies within one of three attack categories, namely adversarial, digital manipulation, or physical spoofs, however, they fail to generalize well when tested across all three categories. Poor generalization can be attributed to learning incoherent attacks jointly. To over-come this shortcoming, we propose a unified attack detection framework, namely UniFAD, that can automatically cluster 25 coherent attack types belonging to the three categories. Using a multi-task learning framework along with k-means clustering, UniFAD learns joint representations for coherent attacks, while uncorrelated attack types are learned separately. Proposed UniFAD outperforms prevailing defense methods and their fusion with an overall TDR = 94.73% @ 0.2% FDR on a large fake face dataset consisting of 341K bona fide images and 448K attack images of 25 types across all 3 categories. Proposed method can detect an attack within 3 milliseconds on a Nvidia 2080Ti. UniFAD can also identify the attack categories with 97.37% accuracy. Code and dataset will be publicly available.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129548321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
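The abstract's clustering step groups the 25 attack types into coherent sets, each handled by a shared branch. The sketch below shows the k-means grouping step only, with random vectors standing in for learned attack-type embeddings and an assumed branch count.

```python
import numpy as np
from sklearn.cluster import KMeans

n_attack_types, feat_dim, n_branches = 25, 128, 4
rng = np.random.default_rng(0)
# One mean feature vector per attack type; in the paper these would come
# from intermediate representations of a jointly trained network.
type_embeddings = rng.normal(size=(n_attack_types, feat_dim))

clusters = KMeans(n_clusters=n_branches, n_init=10,
                  random_state=0).fit_predict(type_embeddings)
for k in range(n_branches):
    members = np.flatnonzero(clusters == k)
    print(f"branch {k} handles attack types: {members.tolist()}")
```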
FaceGuard: A Self-Supervised Defense Against Adversarial Face Images
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2020-11-28 DOI: 10.1109/FG57933.2023.10042617
Debayan Deb, Xiaoming Liu, Anil K. Jain
{"title":"FaceGuard: A Self-Supervised Defense Against Adversarial Face Images","authors":"Debayan Deb, Xiaoming Liu, Anil K. Jain","doi":"10.1109/FG57933.2023.10042617","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042617","url":null,"abstract":"Prevailing defense schemes against adversarial face images tend to overfit to the perturbations in the training set and fail to generalize to unseen adversarial attacks. We propose a new self-supervised adversarial defense framework, namely FaceGuard, that can automatically detect, localize, and purify a wide variety of adversarial faces without utilizing pre-computed adversarial training samples. During training, FaceGuard automatically synthesizes challenging and diverse adversarial attacks, enabling a classifier to learn to distinguish them from real faces. Concurrently, a purifier attempts to remove the adversarial perturbations in the image space. Experimental results on LFW, Celeb-A, and FFHQ datasets show that FaceGuard can achieve 99.81%, 98.73%, and 99.35% detection accuracies, respectively, on six unseen adversarial attack types. In addition, the proposed method can enhance the face recognition performance of ArcFace from 34.27% TAR @ 0.1% FAR under no defense to 77.46% TAR @ 0.1% FAR. Code, pre-trained models and dataset will be publicly available.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115975397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 19
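FaceGuard, per the abstract, pairs a detector with a purifier that removes perturbations in image space. The sketch below mirrors that split with deliberately tiny placeholder networks; nothing about the architectures is from the paper.

```python
import torch
import torch.nn as nn

class FaceGuardSketch(nn.Module):
    """Toy detect-and-purify pair; placeholder architectures only."""
    def __init__(self):
        super().__init__()
        self.detector = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )
        self.purifier = nn.Sequential(  # predicts the perturbation to subtract
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        adv_score = torch.sigmoid(self.detector(x))    # probability of "adversarial"
        purified = (x - self.purifier(x)).clamp(0, 1)  # remove estimated perturbation
        return adv_score, purified

guard = FaceGuardSketch()
faces = torch.rand(2, 3, 112, 112)
score, clean = guard(faces)
```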