2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG): Latest Publications

Investigating Visual Features for Cognitive Impairment Detection Using In-the-wild Data
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2023-01-05 DOI: 10.1109/FG57933.2023.10042506
F. Alzahrani, B. Mirheidari, Daniel Blackburn, Steve Maddock, H. Christensen
{"title":"Investigating Visual Features for Cognitive Impairment Detection Using In-the-wild Data","authors":"F. Alzahrani, B. Mirheidari, Daniel Blackburn, Steve Maddock, H. Christensen","doi":"10.1109/FG57933.2023.10042506","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042506","url":null,"abstract":"Early detection of dementia has attracted much research interest due to its crucial role in helping people get suitable treatment or care. Video analysis may provide an effective approach for detection, with low cost and effort compared to current expensive and intensive clinical assessments. This paper investigates the use of a range of visual features - eye blink rate (EBR), head turn rate (HTR) and head movement statistical features (HMSF) - for identifying neurodegenerative disorder (ND), mild cognitive impairment (MCI) and functional memory disorder (FMD). These features are used in a noval multiple thresholds approach, which is applied to an in-the-wild video dataset which includes data recorded in a range of challenging environments. A combination of EBR and HTR gives 78 % accuracy in a three-way classification task (ND/MCI/FMD) and 83%, 83% and 92%, respectively, for the two-way classifications ND/MCI, ND/FMD and MCI/FMD. These results are comparable to related work that uses more features from different modalities. They also provide evidence to support the possibility of an in-the-home detection process for dementia or cognitive impairment.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127160243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
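The abstract above does not spell out how the multiple-thresholds approach combines EBR and HTR. The snippet below is a minimal, hypothetical sketch of that idea: a per-feature cut point is grid-searched on labeled data and two thresholds are then combined into a simple three-way rule. All thresholds, the decision rule and the synthetic data are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a multiple-thresholds classifier over EBR and HTR.
# Thresholds and the decision rule are illustrative, not the paper's exact method.
import numpy as np

def best_threshold(values, labels):
    """Grid-search a single cut point that best separates two classes (0/1)."""
    best_t, best_acc = values[0], 0.0
    for t in np.unique(values):
        preds = (values >= t).astype(int)
        # accept either polarity of the rule
        acc = max((preds == labels).mean(), (1 - preds == labels).mean())
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

def three_way_predict(ebr, htr, t_ebr, t_htr):
    """Toy rule combining two per-feature thresholds into an ND/MCI/FMD decision."""
    if ebr >= t_ebr and htr >= t_htr:
        return "ND"
    if ebr >= t_ebr or htr >= t_htr:
        return "MCI"
    return "FMD"

# Example with synthetic per-video features (blinks/min, head turns/min).
rng = np.random.default_rng(0)
ebr = rng.normal(20.0, 5.0, 100)
labels = (ebr > 22).astype(int)                 # fake binary labels for the search demo
t, acc = best_threshold(ebr, labels)
print(f"EBR threshold={t:.1f}, split accuracy={acc:.2f}")
print(three_way_predict(ebr=25.0, htr=4.0, t_ebr=t, t_htr=3.0))
```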
Latent Generative Replay for Resource-Efficient Continual Learning of Facial Expressions
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2023-01-05 DOI: 10.1109/FG57933.2023.10042642
Samuil Stoychev, Nikhil Churamani, H. Gunes
{"title":"Latent Generative Replay for Resource-Efficient Continual Learning of Facial Expressions","authors":"Samuil Stoychev, Nikhil Churamani, H. Gunes","doi":"10.1109/FG57933.2023.10042642","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042642","url":null,"abstract":"Real-world Facial Expression Recognition (FER) systems require models to constantly learn and adapt with novel data. Traditional Machine Learning (ML) approaches struggle to adapt to such dynamics as models need to be re-trained from scratch with a combination of both old and new data. Replay-based Continual Learning (CL) provides a solution to this problem, either by storing previously seen data samples in memory, sampling and interleaving them with novel data (rehearsal) or by using a generative model to simulate pseudo-samples to replay past knowledge (pseudo-rehearsal). Yet, the high memory footprint of rehearsal and the high computational cost of pseudo-rehearsal limit the real-world application of such methods, especially on resource-constrained devices. To address this, we propose Latent Generative Replay (LGR) for pseudo-rehearsal of low-dimensional latent features to mitigate forgetting in a resource-efficient manner. We adapt popular CL strategies to use LGR instead of generating pseudo-samples, resulting in performance upgrades when evaluated on the CK+, RAF-DB and AffectNet FER benchmarks where LGR significantly reduces the memory and resource consumption of replay-based CL without compromising model performance.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126388460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
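To make the latent pseudo-rehearsal idea concrete, here is a minimal sketch assuming a frozen backbone whose feature vectors are modeled by a small VAE; sampled latents are interleaved with new-task features when updating the classifier head. The module names, sizes, pseudo-labeling scheme and training loop are assumptions for illustration, not the authors' implementation.

```python
# Sketch of latent pseudo-rehearsal: a small VAE learns the distribution of
# frozen-backbone features from past tasks; sampled latents are interleaved
# with new-task features when updating the classifier head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentVAE(nn.Module):
    def __init__(self, feat_dim=512, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, z_dim), nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))

    def forward(self, f):
        h = self.enc(f)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

    def sample(self, n):
        return self.dec(torch.randn(n, self.mu.out_features))

feat_dim = 512
vae, head = LatentVAE(feat_dim), nn.Linear(feat_dim, 7)          # 7 expression classes (assumption)
opt = torch.optim.Adam(list(vae.parameters()) + list(head.parameters()), lr=1e-3)

new_feats = torch.randn(64, feat_dim)              # stand-in for current-task backbone features
new_labels = torch.randint(0, 7, (64,))
replayed = vae.sample(64).detach()                 # pseudo-features replaying past tasks
# In a real system the replayed labels would come from the previous task's classifier head.
replayed_labels = head(replayed).argmax(dim=1).detach()

logits = head(torch.cat([new_feats, replayed]))
loss = F.cross_entropy(logits, torch.cat([new_labels, replayed_labels]))
recon, mu, logvar = vae(new_feats)                 # keep the VAE up to date on new features
loss = loss + F.mse_loss(recon, new_feats) \
            - 0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
opt.zero_grad()
loss.backward()
opt.step()
```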
Multi-Zone Transformer Based on Self-Distillation for Facial Attribute Recognition
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2023-01-05 DOI: 10.1109/FG57933.2023.10042513
Si Chen, Xueyan Zhu, Da-han Wang, Shunzhi Zhu, Yun Wu
{"title":"Multi-Zone Transformer Based on Self-Distillation for Facial Attribute Recognition","authors":"Si Chen, Xueyan Zhu, Da-han Wang, Shunzhi Zhu, Yun Wu","doi":"10.1109/FG57933.2023.10042513","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042513","url":null,"abstract":"Recently, transformers have shown great promising performance in various computer vision tasks. However, the current transformer based methods ignore the information exchanges between transformer blocks, and they have not been applied in the facial attribute recognition task. In this paper, we propose a multi-zone transformer based on self-distillation for FAR, termed MZTS, to predict the facial attributes. A multi-zone transformer encoder is firstly presented to achieve the interactions of the different transformer encoder blocks, thus avoiding forgetting the effective information between the transformer encoder block groups during the iteration process. Furthermore, we introduce a new self-distillation mechanism based on class tokens, which distills the class tokens obtained from the last transformer encoder block group to the other shallow groups by interacting with the significant information between the different transformer blocks through attention. Extensive experiments on the challenging CelebA and LFWA datasets have demonstrated the excellent performance of the proposed method for FAR.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125953319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
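The class-token self-distillation can be pictured as a soft-target loss that pushes the predictions from shallower block groups toward those of the last group. Below is a hedged sketch of such a loss; the temperature, the use of a shared attribute classifier, and the per-attribute sigmoid targets are assumptions, since the abstract does not give the exact MZTS formulation.

```python
# Sketch of class-token self-distillation: shallow block-group class tokens are
# distilled toward the deepest group's class token through a shared classifier.
# Temperature, weighting and classifier sharing are illustrative assumptions.
import torch
import torch.nn.functional as F

def class_token_distillation(shallow_tokens, deep_token, classifier, temperature=2.0):
    """shallow_tokens: list of (B, D) class tokens from earlier block groups.
    deep_token: (B, D) class token from the last block group (the teacher)."""
    with torch.no_grad():
        # Facial attribute recognition is multi-label, so per-attribute sigmoids
        # are used as soft teacher targets (an assumption of this sketch).
        teacher = torch.sigmoid(classifier(deep_token) / temperature)
    loss = 0.0
    for tok in shallow_tokens:
        student_logits = classifier(tok) / temperature
        loss = loss + F.binary_cross_entropy_with_logits(student_logits, teacher)
    return loss / len(shallow_tokens)

# Toy usage with random tokens and a shared classifier over 40 CelebA attributes.
clf = torch.nn.Linear(768, 40)
shallow = [torch.randn(8, 768) for _ in range(3)]
deep = torch.randn(8, 768)
print(class_token_distillation(shallow, deep, clf))
```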
TransFS: Face Swapping Using Transformer
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2023-01-05 DOI: 10.1109/FG57933.2023.10042556
Wei Cao, Tianyi Wang, Anming Dong, Minglei Shu
{"title":"TransFS: Face Swapping Using Transformer","authors":"Wei Cao, Tianyi Wang, Anming Dong, Minglei Shu","doi":"10.1109/FG57933.2023.10042556","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042556","url":null,"abstract":"This paper proposes a Transformer based face swapping model, namely, TransFS. The proposed model mainly solves two current problems of face swapping: 1) the face swapping result does not fully preserve pose and expression of the target face as expected; 2) most of the existing models fail to accomplish high-quality face swapping on high-resolution images. To address these two challenges, we first propose a Cross- Window Face Encoder based on Swin Transformer that learns rich facial features including poses and expressions. Then, we devise an Identity Generator to reconstruct high-resolution images of specific identity with high quality while utilizing the Transformer attention mechanism to increase identity information retention. Finally, a Face Conversion Module is proposed to transform the source identity reconstructed image into the target face image to synthesize the final face swapping result while maintaining the details of pose and expression of the target face. Through extensive experiments, our method not only accomplishes face swapping for low-resolution images with arbitrary identities, but also accomplishes face swapping for high-resolution images. Furthermore, our method achieves the state-of-the-art performance in pose and expression controls compared to other methods.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125974995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
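The abstract says Transformer attention is used to retain identity information; one generic way to realize that is cross-attention in which target-face feature tokens attend to source identity tokens. The block below is a standard cross-attention sketch of that generic mechanism, not the paper's verified module design; dimensions and the residual/norm layout are assumptions.

```python
# Generic cross-attention sketch: target-face tokens (queries) attend to
# source identity tokens (keys/values) so identity cues can be injected.
import torch
import torch.nn as nn

class IdentityCrossAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target_tokens, identity_tokens):
        # target_tokens: (B, N, dim) features of the target face
        # identity_tokens: (B, M, dim) embedding tokens of the source identity
        attended, _ = self.attn(query=target_tokens, key=identity_tokens, value=identity_tokens)
        return self.norm(target_tokens + attended)   # residual + layer norm (assumed layout)

block = IdentityCrossAttention()
out = block(torch.randn(2, 196, 256), torch.randn(2, 4, 256))
print(out.shape)  # torch.Size([2, 196, 256])
```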
LipNeRF: What is the right feature space to lip-sync a NeRF?
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2023-01-05 DOI: 10.1109/FG57933.2023.10042567
Aggelina Chatziagapi, ShahRukh Athar, Abhinav Jain, M. Rohith, Vimal Bhat, D. Samaras
{"title":"LipNeRF: What is the right feature space to lip-sync a NeRF?","authors":"Aggelina Chatziagapi, ShahRukh Athar, Abhinav Jain, M. Rohith, Vimal Bhat, D. Samaras","doi":"10.1109/FG57933.2023.10042567","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042567","url":null,"abstract":"Synthesizing high-fidelity talking head videos of an arbitrary identity, lip-synced to a target speech segment, is a challenging problem. Recent GAN-based methods succeed by training a model on a large amount of videos, allowing the generator to learn a variety of audio-lip representations. However, they are unable to handle head pose changes. On the other hand, Neural Radiance Fields (NeRFs) model the 3D face geometry more accurately. Current audio-conditioned NeRFs are not as good in lip synchronization as GANs, since they are trained on limited video data of a single identity. In this work, we propose LipNeRF, a lip-syncing NeRF that bridges the gap between the accurate lip synchronization of GAN-based methods and the accurate 3D face modeling of NeRFs. LipNeRF is conditioned on the expression space of a 3DMM, instead of the audio feature space. We experimentally demonstrate that the expression space gives a better representation for the lip shape than the audio feature space. LipNeRF shows a significant improvement in lip-sync quality over the current state-of-the-art, especially in high-definition videos of cinematic content, with challenging pose, illumination and expression variations.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116504321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
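A minimal sketch of the conditioning idea: the NeRF field takes a per-frame 3DMM expression code as an extra input alongside the positionally encoded 3D point and view direction. Layer widths, encoding frequencies and the injection point are assumptions; the paper's exact architecture is not given in the abstract.

```python
# Sketch of an expression-conditioned NeRF field: density and color are predicted
# from an encoded 3D point, a view direction, and a 3DMM expression code.
import math
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=10):
    """Standard NeRF sin/cos encoding of coordinates (roughly in [-1, 1])."""
    out = [x]
    for i in range(n_freqs):
        out += [torch.sin((2 ** i) * math.pi * x), torch.cos((2 ** i) * math.pi * x)]
    return torch.cat(out, dim=-1)

class ExpressionConditionedNeRF(nn.Module):
    def __init__(self, expr_dim=64, n_freqs=10, width=256):
        super().__init__()
        pos_dim = 3 * (1 + 2 * n_freqs)
        dir_dim = 3 * (1 + 2 * 4)
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim + expr_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU())
        self.sigma = nn.Linear(width, 1)                       # volume density
        self.color = nn.Sequential(nn.Linear(width + dir_dim, width // 2), nn.ReLU(),
                                   nn.Linear(width // 2, 3), nn.Sigmoid())

    def forward(self, xyz, view_dir, expr_code):
        h = self.trunk(torch.cat([positional_encoding(xyz), expr_code], dim=-1))
        sigma = torch.relu(self.sigma(h))
        rgb = self.color(torch.cat([h, positional_encoding(view_dir, n_freqs=4)], dim=-1))
        return rgb, sigma

model = ExpressionConditionedNeRF()
rgb, sigma = model(torch.rand(1024, 3), torch.rand(1024, 3), torch.rand(1024, 64))
print(rgb.shape, sigma.shape)  # torch.Size([1024, 3]) torch.Size([1024, 1])
```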
New Insights on Weight Estimation from Face Images
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2023-01-05 DOI: 10.1109/FG57933.2023.10042568
Nelida Mirabet Herranz, Khawla Mallat, J. Dugelay
{"title":"New Insights on Weight Estimation from Face Images","authors":"Nelida Mirabet Herranz, Khawla Mallat, J. Dugelay","doi":"10.1109/FG57933.2023.10042568","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042568","url":null,"abstract":"Weight is a soft biometric trait which estimation is useful in numerous health related applications such as remote estimation from a health professional or at-home daily monitoring. In scenarios when a scale is unavailable or the subject is unable to cooperate, i.e. road accidents, estimating a person's weight from face appearance allows for a contactless measurement. In this article, we define an optimal transfer learning protocol for a ResNet50 architecture obtaining better performances than the state-of-the-art thus moving one step forward in closing the gap between remote weight estimation and physical devices. We also demonstrate that gender-splitting, image cropping and hair occlusion play an important role in weight estimation which might not necessarily be the case in face recognition. We use up-to-date explainability tools to illustrate and validate our assumptions. We conduct extensive simulations on the most popular publicly available face dataset annotated by weight to ensure a fair comparison with other approaches and we aim to overcome its flaws by presenting our self-collected database composed of 400 new images.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130465144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
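The abstract refers to a transfer learning protocol built on ResNet50; a minimal, hedged sketch of one such protocol is shown below: an ImageNet-pretrained backbone with its classifier replaced by a single-output regression head, early layers frozen. The freezing policy, optimizer settings and loss are assumptions, not the authors' reported optimal protocol.

```python
# Sketch of transfer learning for weight regression from a cropped face image:
# ImageNet-pretrained ResNet50 with its classifier replaced by a 1-output head.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)  # torchvision >= 0.13
for p in backbone.parameters():                  # freeze early layers...
    p.requires_grad = False
for p in backbone.layer4.parameters():          # ...fine-tune the last residual stage
    p.requires_grad = True
backbone.fc = nn.Sequential(nn.Linear(backbone.fc.in_features, 256),
                            nn.ReLU(), nn.Dropout(0.3), nn.Linear(256, 1))

opt = torch.optim.Adam([p for p in backbone.parameters() if p.requires_grad], lr=1e-4)
images = torch.rand(8, 3, 224, 224)              # stand-in batch of cropped face images
weights_kg = torch.rand(8, 1) * 60 + 50          # stand-in ground-truth weights in kg

pred = backbone(images)
loss = F.l1_loss(pred, weights_kg)               # mean absolute error in kg
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```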
Signing Avatars - Multimodal Challenges for Text-to-sign Generation
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2023-01-05 DOI: 10.1109/FG57933.2023.10042759
S. Gibet, P. Marteau
{"title":"Signing Avatars - Multimodal Challenges for Text-to-sign Generation","authors":"S. Gibet, P. Marteau","doi":"10.1109/FG57933.2023.10042759","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042759","url":null,"abstract":"This paper is a positional paper that surveys existing technologies for animating signing avatars from written language. The main grammatical mechanisms of sign languages are described, and in particular the sign inflecting mechanisms in light of the processes of spatialization and iconicity that characterize these visual-gestural languages. The challenges faced by sign language generation systems using signing avatars are then outlined, as well as unresolved issues in building text-to-sign generation systems.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134088555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PhygitalNet: Unified Face Presentation Attack Detection via One-Class Isolation Learning
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2023-01-05 DOI: 10.1109/FG57933.2023.10042797
K. Thakral, S. Mittal, Mayank Vatsa, Richa Singh
{"title":"PhygitalNet: Unified Face Presentation Attack Detection via One-Class Isolation Learning","authors":"K. Thakral, S. Mittal, Mayank Vatsa, Richa Singh","doi":"10.1109/FG57933.2023.10042797","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042797","url":null,"abstract":"Face biometric systems are shown to be vulnerable to various kinds of presentation attacks including physical and digital attacks. Existing research generally focuses on individual attacks and very few focus on generalizability across digital and physical attacks. In this research, we propose PhygitalNet model that generalizes to both physical and digital presentation attacks on face biometric systems. The proposed model is based on novel one-class iSOLatiOn Learning (SOLO Learning) which is a two-step training process aimed at reducing of the covariate shift between the bonafide samples of the physical as well as digital attack dataset in the pre-training step. In the downstream step, the algorithm introduces a novel single-class iSOLatiOn loss (SOLO loss) function that isolates the samples belonging to the bonafide class away from the samples of the attacked class for both the attack methods. Experimental results show that PhygitalNet achieves a significant performance gain when compared with the baseline techniques, evaluated on a combination of MLFP, MSU-MFSD dataset (for physical attack) and FaceForensics++ (for digital attack) datasets.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122494985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
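The exact form of the SOLO loss is not given in the abstract; below is a hedged sketch of a one-class isolation objective in the same spirit: bonafide embeddings are pulled toward a learned center while attack embeddings are pushed beyond a margin. The center parameterization, the margin value and the squared-distance terms are illustrative assumptions, not the published loss.

```python
# Sketch of a generic one-class isolation objective for unified (physical + digital)
# presentation attack detection. Not the published SOLO loss formulation.
import torch
import torch.nn as nn

class OneClassIsolationLoss(nn.Module):
    def __init__(self, feat_dim=128, margin=1.0):
        super().__init__()
        self.center = nn.Parameter(torch.zeros(feat_dim))   # learned bonafide center
        self.margin = margin

    def forward(self, feats, labels):
        # feats: (B, D) embeddings; labels: 1 = bonafide, 0 = attack (physical or digital)
        dist = torch.norm(feats - self.center, dim=1)
        bona = dist[labels == 1].pow(2).mean() if (labels == 1).any() \
            else feats.new_tensor(0.0)
        attack = torch.clamp(self.margin - dist[labels == 0], min=0).pow(2).mean() \
            if (labels == 0).any() else feats.new_tensor(0.0)
        return bona + attack                                  # pull bonafide in, push attacks out

criterion = OneClassIsolationLoss()
feats, labels = torch.randn(16, 128), torch.randint(0, 2, (16,))
print(criterion(feats, labels))
```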
RULe: Relocalization-Uniformization-Landmark Estimation Network for Real-Time Face Alignment in Degraded Conditions
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2023-01-05 DOI: 10.1109/FG57933.2023.10042577
Arnaud Dapogny, Gauthier Tallec, Jules Bonnard, Edouard Yvinec, Kévin Bailly
{"title":"RULe: Relocalization-Uniformization-Landmark Estimation Network for Real-Time Face Alignment in Degraded Conditions","authors":"Arnaud Dapogny, Gauthier Tallec, Jules Bonnard, Edouard Yvinec, Kévin Bailly","doi":"10.1109/FG57933.2023.10042577","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042577","url":null,"abstract":"Face alignment refers to the process of estimating the position of a number of salient landmarks on face images or videos, such as mouth and eye corners, nose tip, etc. With the availability of large annotated databases and the rise of deep learning-based methods, face alignment as a domain has matured to a point where it can be applied in more or less unconstrained conditions, e.g. non-frontal head poses, presence of heavy make-up or partial occlusions. However, when considering real-case alignment on videos with possibly low frame rates, we need to make sure that the algorithms are robust to jittering of the face bounding box localization, low-resolution of the face crops, possible bad environmental lighting, brightness, and presence of noise. To tackle these issues, we propose RULe, a three-staged Relocalization-Uniformization-Landmark Estimation network. In the first stage, an initial loosely localized bounding box gets refined to output a well centered face crop, thus reducing the variability of the images prior to passing them to the subsequent stage. Then, in the second stage, the face style is uniformized (using adversarial learning as well as perceptual losses) to correct low resolution or variations of brightness/contrast. Finally, the third stage outputs a precise landmark estimation given such enhanced face crop using a cascaded compact model trained using hint-based knowledge distillation. We show through a variety of experiments that RULe achieves real-time face alignment with state-of-the-art precision in heavily degraded conditions.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114900823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
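Hint-based knowledge distillation typically means matching a compact student's intermediate features to a larger teacher's through a small adapter (FitNets-style), alongside distilling the final predictions. The sketch below illustrates that loss for a landmark regressor; the chosen layers, adapter shape, landmark count and loss weighting are assumptions, not the paper's exact setup.

```python
# Sketch of FitNets-style hint distillation for a compact landmark regressor:
# a 1x1 conv adapter maps the student's intermediate feature map to the teacher's
# channel width, and an L2 "hint" loss is added to a landmark-prediction loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

student_channels, teacher_channels = 64, 256
adapter = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

def hint_distillation_loss(student_feat, teacher_feat, student_pts, teacher_pts, alpha=0.5):
    """student_feat: (B, 64, H, W), teacher_feat: (B, 256, H, W),
    *_pts: (B, 68, 2) predicted landmark coordinates (68 points assumed)."""
    hint = F.mse_loss(adapter(student_feat), teacher_feat.detach())
    landmark = F.l1_loss(student_pts, teacher_pts.detach())   # also distill final predictions
    return landmark + alpha * hint

s_feat, t_feat = torch.randn(4, 64, 28, 28), torch.randn(4, 256, 28, 28)
s_pts, t_pts = torch.randn(4, 68, 2), torch.randn(4, 68, 2)
print(hint_distillation_loss(s_feat, t_feat, s_pts, t_pts))
```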
Graph Transformer for Physical Rehabilitation Evaluation
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2023-01-05 DOI: 10.1109/FG57933.2023.10042778
Kévin Réby, Idris Dulau, Guillaume Dubrasquet, M. Beurton-Aimar
{"title":"Graph Transformer for Physical Rehabilitation Evaluation","authors":"Kévin Réby, Idris Dulau, Guillaume Dubrasquet, M. Beurton-Aimar","doi":"10.1109/FG57933.2023.10042778","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042778","url":null,"abstract":"Physical rehabilitation is a medical specialty that focuses on the restoration of body functions in the safest and most effective way possible. During a rehabilitation exercise session, patients' behavior reflects their health status and is an important indicator of the treatment outcome. Building an automatic system for the evaluation of human motion quality in an objective and reliable way can be used in medicine to establish a differential diagnosis, to choose the adequate treatment, or for patient monitoring. Deep learning has become the state-of-the-art in Human Action and Human Behavior Recognition from videos. Most of the state-of-the-art model architectures are CNNs based and often use RNNs to calculate temporal dependencies among frames, and ignore the topological structure of the human body. In this work, we propose to use a Graph Transformer network with spatial and temporal attention mechanisms for physical rehabilitation evaluation. First, we used a standard Transformer network with a selfattention mechanism, then we take advantage of graph skeletons as inputs for a two-stream spatio-temporal graph network with both spatial and temporal attention mechanisms. We used our model on UI-PRMD, a benchmark dataset that provides skeleton data using motion capture systems. Our results show that our attention-based based ST-GCN models outperform the state-of-the-art methods on quality score prediction and binary classification.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115752393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
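A minimal sketch of the spatial/temporal attention idea over skeleton sequences: one multi-head attention operates across joints within each frame, the other across frames for each joint, before pooling to a quality score. The tensor layout, joint count, dimensions, pooling and regression head are assumptions, not the paper's exact architecture.

```python
# Sketch of spatial and temporal self-attention over a skeleton sequence
# (frames x joints x 3D coordinates) for exercise-quality score regression.
import torch
import torch.nn as nn

class SpatioTemporalAttention(nn.Module):
    def __init__(self, in_dim=3, dim=64, heads=4):
        super().__init__()
        self.embed = nn.Linear(in_dim, dim)
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, x):
        # x: (B, T, J, 3) joint coordinates over time
        B, T, J, _ = x.shape
        h = self.embed(x)                                   # (B, T, J, dim)
        hs = h.reshape(B * T, J, -1)                        # attention across joints per frame
        hs, _ = self.spatial(hs, hs, hs)
        h = hs.reshape(B, T, J, -1).permute(0, 2, 1, 3)     # (B, J, T, dim)
        ht = h.reshape(B * J, T, -1)                        # attention across frames per joint
        ht, _ = self.temporal(ht, ht, ht)
        pooled = ht.reshape(B, J, T, -1).mean(dim=(1, 2))   # average over joints and time
        return self.score(pooled).squeeze(-1)               # one quality score per sequence

model = SpatioTemporalAttention()
print(model(torch.rand(2, 100, 22, 3)).shape)  # torch.Size([2])
```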