2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG): Latest Publications

CoNFies: Controllable Neural Face Avatars
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-11-16 DOI: 10.1109/FG57933.2023.10042763
Heng Yu, Koichiro Niinuma, László A. Jeni
{"title":"CoNFies: Controllable Neural Face Avatars","authors":"Heng Yu, Koichiro Niinuma, László A. Jeni","doi":"10.1109/FG57933.2023.10042763","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042763","url":null,"abstract":"Neural Radiance Fields (NeRF) are compelling techniques for modeling dynamic 3D scenes from 2D image collections. These volumetric representations would be well suited for synthesizing novel facial expressions but for two problems. First, deformable NeRFs are object agnostic and model holistic movement of the scene: they can replay how the motion changes over time, but they cannot alter it in an interpretable way. Second, controllable volumetric representations typically require either time-consuming manual annotations or 3D supervision to provide semantic meaning to the scene. We propose a controllable neural representation for face self-portraits (CoNFies), that solves both of these problems within a common framework, and it can rely on automated processing. We use automated facial action recognition (AFAR) to characterize facial expressions as a combination of action units (AU) and their intensities. AUs provide both the semantic locations and control labels for the system. CoNFies outperformed competing methods for novel view and expression synthesis in terms of visual and anatomic fidelity of expressions.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":" 16","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132187505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
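The entry above describes controlling a deformable NeRF through automatically detected action units (AUs) and their intensities. As a rough illustration of that idea only, and not the authors' implementation, the sketch below conditions a toy deformation MLP on an AU intensity vector; the layer sizes, AU count, and index ordering are assumptions made for the example.

```python
import torch
import torch.nn as nn

class AUConditionedDeformation(nn.Module):
    """Toy deformation field conditioned on an AU intensity vector.

    Illustrative only: maps a 3D point plus an AU code to a displaced point,
    mimicking the general idea of steering a deformable NeRF with
    action-unit intensities. Layer sizes are arbitrary assumptions.
    """
    def __init__(self, num_aus: int = 12, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + num_aus, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # per-point displacement
        )

    def forward(self, points: torch.Tensor, au_intensities: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) sample locations; au_intensities: (num_aus,) in [0, 1]
        au = au_intensities.expand(points.shape[0], -1)
        return points + self.mlp(torch.cat([points, au], dim=-1))

# Example: warp sampled points for "lip corner puller at intensity 0.8".
deform = AUConditionedDeformation(num_aus=12)
pts = torch.rand(1024, 3)
aus = torch.zeros(12)
aus[6] = 0.8  # hypothetical index for that AU in this toy ordering
warped = deform(pts, aus)
```

In a full system the displaced points would be fed to a canonical radiance field; the point here is only that AU intensities act as an interpretable control signal.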
Unsupervised Face Recognition using Unlabeled Synthetic Data
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-11-14 DOI: 10.1109/FG57933.2023.10042627
F. Boutros, Marcel Klemt, Meiling Fang, Arjan Kuijper, N. Damer
{"title":"Unsupervised Face Recognition using Unlabeled Synthetic Data","authors":"F. Boutros, Marcel Klemt, Meiling Fang, Arjan Kuijper, N. Damer","doi":"10.1109/FG57933.2023.10042627","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042627","url":null,"abstract":"Over the past years, the main research innovations in face recognition focused on training deep neural networks on large-scale identity-labeled datasets using variations of multi-class classification losses. However, many of these datasets are retreated by their creators due to increased privacy and ethical concerns. Very recently, privacy-friendly synthetic data has been proposed as an alternative to privacy-sensitive authentic data to comply with privacy regulations and to ensure the continuity of face recognition research. In this paper, we propose an unsupervised face recognition model based on unlabeled synthetic data (USynthFace). Our proposed USynthFace learns to maximize the similarity between two augmented images of the same synthetic instance. We enable this by a large set of geometric and color transformations in addition to GAN-based augmentation that contributes to the USynthFace model training. We also conduct numerous empirical studies on different components of our USynthFace. With the proposed set of augmentation operations, we proved the effectiveness of our USynthFace in achieving relatively high recognition accuracies using unlabeled synthetic data. The training code and pretrained model are publicly available under https://github.com/fdbtrs/Unsupervised-Face-Recognition-using-Unlabeled-Synthetic-Data.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124095105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
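USynthFace, as summarized above, learns by maximizing the similarity between two augmented views of the same synthetic face. A minimal sketch of that general recipe follows, assuming an NT-Xent-style contrastive loss over normalized embeddings; the exact augmentation set, temperature, and loss form used in the paper may differ.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Stochastic augmentations: applied twice per synthetic image to produce two views.
augment = T.Compose([
    T.RandomResizedCrop(112, scale=(0.6, 1.0)),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    T.RandomGrayscale(p=0.2),
])

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent style contrastive loss: pull the two views of the same instance
    together and push apart the views of different instances in the batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, d)
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))                # ignore self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage with embeddings from any backbone (here random stand-ins):
z_view1, z_view2 = torch.randn(32, 256), torch.randn(32, 256)
loss = nt_xent_loss(z_view1, z_view2)
```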
Pain Detection in Masked Faces during Procedural Sedation
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-11-12 DOI: 10.1109/FG57933.2023.10042502
Y. Zarghami, S. Mafeld, A. Conway, B. Taati
{"title":"Pain Detection in Masked Faces during Procedural Sedation","authors":"Y. Zarghami, S. Mafeld, A. Conway, B. Taati","doi":"10.1109/FG57933.2023.10042502","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042502","url":null,"abstract":"Pain monitoring is essential to the quality of care for patients undergoing a medical procedure with sedation. An automated mechanism for detecting pain could improve sedation dose titration. Previous studies on facial pain detection have shown the viability of computer vision methods in detecting pain in unoccluded faces. However, the faces of patients undergoing procedures are often partially occluded by medical devices and face masks. A previous preliminary study on pain detection on artificially occluded faces has shown a feasible approach to detect pain from a narrow band around the eyes. This study has collected video data from masked faces of 14 patients undergoing procedures in an interventional radiology department and has trained a deep learning model using this dataset. The model was able to detect expressions of pain accurately and, after causal temporal smoothing, achieved an average precision (AP) of 0.72 and an area under the receiver operating characteristic curve (AVC) of 0.82. These results outperform baseline models and show viability of computer vision approaches for pain detection of masked faces during procedural sedation. Cross-dataset performance is also examined when a model is trained on a publicly available dataset and tested on the sedation videos. The ways in which pain expressions differ in the two datasets are qualitatively examined.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132940608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
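The abstract reports frame-level pain scores that are post-processed with causal temporal smoothing before AP and AUC are computed. The smoothing operator is not spelled out here, so the sketch below uses a simple causal exponential moving average as a stand-in, with scikit-learn metrics; the smoothing factor and the random data are purely illustrative.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def causal_ema(probs: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Causal exponential smoothing: each smoothed value depends only on the
    current and earlier frame probabilities (no look-ahead into the future)."""
    smoothed = np.empty_like(probs)
    smoothed[0] = probs[0]
    for t in range(1, len(probs)):
        smoothed[t] = alpha * probs[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed

# frame_probs: per-frame pain probabilities from a hypothetical classifier
rng = np.random.default_rng(0)
frame_probs = rng.random(500)
labels = (rng.random(500) > 0.8).astype(int)   # stand-in pain / no-pain labels

smoothed = causal_ema(frame_probs, alpha=0.1)
ap = average_precision_score(labels, smoothed)
auc = roc_auc_score(labels, smoothed)
print(f"AP={ap:.2f}, AUC={auc:.2f}")
```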
Are Face Detection Models Biased?
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-11-07 DOI: 10.1109/FG57933.2023.10042564
S. Mittal, K. Thakral, P. Majumdar, Mayank Vatsa, Richa Singh
{"title":"Are Face Detection Models Biased?","authors":"S. Mittal, K. Thakral, P. Majumdar, Mayank Vatsa, Richa Singh","doi":"10.1109/FG57933.2023.10042564","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042564","url":null,"abstract":"The presence of bias in deep models leads to unfair outcomes for certain demographic subgroups. Research in bias focuses primarily on facial recognition and attribute prediction with scarce emphasis on face detection. Existing studies consider face detection as binary classification into ‘face’ and ‘non-face’ classes. In this work, we investigate possible bias in the domain of face detection through facial region localization which is currently unexplored. Since facial region localization is an essential task for all face recognition pipelines, it is imperative to analyze the presence of such bias in popular deep models. Most existing face detection datasets lack suitable annotation for such analysis. Therefore, we web-curate the Fair Face Localization with Attributes (F2LA) dataset and manually annotate more than 10 attributes per face, including facial localization information. Utilizing the extensive annotations from F2LA, an experimental setup is designed to study the performance of four pre-trained face detectors. We observe (i) a high disparity in detection accuracies across gender and skin-tone, and (ii) interplay of confounding factors beyond demography. The F2LA data and associated annotations can be accessed at http://iab-rubric.org/index.php/F2LA.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123692660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
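The study above measures disparities in detection accuracy across annotated subgroups such as gender and skin tone. A minimal sketch of that kind of subgroup analysis is shown below, using a hypothetical results table and a simple max-gap disparity; the F2LA attributes, matching criteria, and metrics used in the paper are richer than this.

```python
import pandas as pd

# Hypothetical per-face detection outcomes with F2LA-style attribute annotations.
df = pd.DataFrame({
    "detected":  [1, 1, 0, 1, 0, 1, 1, 0],
    "gender":    ["f", "m", "f", "m", "f", "m", "f", "m"],
    "skin_tone": ["dark", "light", "dark", "light", "dark", "light", "dark", "light"],
})

def subgroup_rates(frame: pd.DataFrame, attribute: str) -> pd.Series:
    """Detection rate per demographic subgroup for a single attribute."""
    return frame.groupby(attribute)["detected"].mean()

for attr in ["gender", "skin_tone"]:
    rates = subgroup_rates(df, attr)
    disparity = rates.max() - rates.min()   # simple max-gap disparity measure
    print(attr, rates.to_dict(), f"disparity={disparity:.2f}")
```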
The Florence 4D Facial Expression Dataset
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-10-30 DOI: 10.1109/FG57933.2023.10042606
F. Principi, S. Berretti, C. Ferrari, N. Otberdout, M. Daoudi, A. Bimbo
{"title":"The Florence 4D Facial Expression Dataset","authors":"F. Principi, S. Berretti, C. Ferrari, N. Otberdout, M. Daoudi, A. Bimbo","doi":"10.1109/FG57933.2023.10042606","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042606","url":null,"abstract":"Human facial expressions change dynamically, so their recognition / analysis should be conducted by accounting for the temporal evolution of face deformations either in 2D or 3D. While abundant 2D video data do exist, this is not the case in 3D, where few 3D dynamic (4D) datasets were released for public use. The negative consequence of this scarcity of data is amplified by current deep learning based-methods for facial expression analysis that require large quantities of variegate samples to be effectively trained. With the aim of smoothing such limitations, in this paper we propose a large dataset, named Florence 4D, composed of dynamic sequences of 3D face models, where a combination of synthetic and real identities exhibit an unprecedented variety of 4D facial expressions, with variations that include the classical neutral-apex transition, but generalize to expression-to-expression. All these characteristics are not exposed by any of the existing 4D datasets and they cannot even be obtained by combining more than one dataset. We strongly believe that making such a data corpora publicly available to the community will allow designing and experimenting new applications that were not possible to investigate till now. To show at some extent the difficulty of our data in terms of different identities and varying expressions, we also report a baseline experimentation on the proposed dataset that can be used as baseline.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123591720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Automatic Assessment of Infant Face and Upper-Body Symmetry as Early Signs of Torticollis
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-10-26 DOI: 10.1109/FG57933.2023.10042719
Michael Wan, Xiaofei Huang, Bethany Tunik, S. Ostadabbas
{"title":"Automatic Assessment of Infant Face and Upper-Body Symmetry as Early Signs of Torticollis","authors":"Michael Wan, Xiaofei Huang, Bethany Tunik, S. Ostadabbas","doi":"10.1109/FG57933.2023.10042719","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042719","url":null,"abstract":"We apply computer vision pose estimation techniques developed expressly for the data-scarce infant domain to the study of torticollis, a common condition in infants for which early identification and treatment is critical. Specifically, we use a combination of facial landmark and body joint estimation techniques designed for infants to estimate a range of geometric measures pertaining to face and upper body symmetry, drawn from an array of sources in the physical therapy and ophthal-mology research literature in torticollis. We gauge performance with a range of metrics and show that the estimates of most these geometric measures are successful, yielding strong to very strong Spearman's $p$ correlation with ground truth values. Furthermore, we show that these estimates, derived from pose estimation neural networks designed for the infant domain, cleanly outperform estimates derived from more widely known networks designed for the adult domain11Code and data available at https://github.com/ostadabbas/Infant-Upper-Body-Postural-Symmetry..","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"53 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114124109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
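The work above estimates geometric face and upper-body symmetry measures from infant landmarks and validates them via Spearman correlation against ground truth. The sketch below illustrates that validation pipeline with a single made-up measure (tilt of the inter-ocular line) and synthetic ground truth; the actual measures come from the physical therapy and ophthalmology literature and are not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr

def head_tilt_deg(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """One simple symmetry-related measure: angle of the inter-ocular line
    relative to the image horizontal, from two 2D eye landmarks."""
    dx, dy = right_eye - left_eye
    return float(np.degrees(np.arctan2(dy, dx)))

# Hypothetical per-image estimates vs. manually annotated ground truth.
rng = np.random.default_rng(0)
estimated = np.array([
    head_tilt_deg(np.array([100.0, 120.0]), np.array([160.0, 118.0 + k]))
    for k in range(10)
])
ground_truth = estimated + rng.normal(0, 0.5, size=10)  # stand-in annotations

rho, p_value = spearmanr(estimated, ground_truth)
print(f"Spearman rho={rho:.2f} (p={p_value:.3f})")
```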
SS-VAERR: Self-Supervised Apparent Emotional Reaction Recognition from Video
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-10-20 DOI: 10.1109/FG57933.2023.10042638
Marija Jegorova, Stavros Petridis, M. Pantic
{"title":"SS-VAERR: Self-Supervised Apparent Emotional Reaction Recognition from Video","authors":"Marija Jegorova, Stavros Petridis, M. Pantic","doi":"10.1109/FG57933.2023.10042638","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042638","url":null,"abstract":"This work focuses on the apparent emotional reaction recognition (AERR) from the video-only input, conducted in a self-supervised fashion. The network is first pre-trained on different self-supervised pretext tasks and later fine-tuned on the downstream target task. Self-supervised learning facilitates the use of pre-trained architectures and larger datasets that might be deemed unfit for the target task and yet might be useful to learn informative representations and hence provide useful initializations for further fine-tuning on smaller more suitable data. Our presented contribution is two-fold: (1) an analysis of different state-of-the-art (SOTA) pretext tasks for the video-only apparent emotional reaction recognition architecture, and (2) an analysis of various combinations of the regression and classification losses that are likely to improve the performance further. Together these two contributions result in the current state-of-the-art performance for the video-only spontaneous apparent emotional reaction recognition with continuous annotations.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132531783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
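The paper analyzes combinations of regression and classification losses for fine-tuning. One common way to combine the two for continuous and categorical emotion targets is a weighted sum of a (1 - CCC) regression term and a cross-entropy term, sketched below; the specific loss forms and weights explored in the paper may differ.

```python
import torch
import torch.nn.functional as F

def ccc_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """1 - Concordance Correlation Coefficient, a common regression loss for
    continuously annotated affect dimensions (e.g., valence or arousal)."""
    pred_m, tgt_m = pred.mean(), target.mean()
    pred_v, tgt_v = pred.var(unbiased=False), target.var(unbiased=False)
    cov = ((pred - pred_m) * (target - tgt_m)).mean()
    ccc = 2 * cov / (pred_v + tgt_v + (pred_m - tgt_m) ** 2 + 1e-8)
    return 1 - ccc

def combined_loss(reg_pred, reg_target, cls_logits, cls_target,
                  w_reg: float = 1.0, w_cls: float = 0.5) -> torch.Tensor:
    """Weighted sum of a regression loss and a classification loss;
    the weights here are illustrative, not values from the paper."""
    return w_reg * ccc_loss(reg_pred, reg_target) + \
           w_cls * F.cross_entropy(cls_logits, cls_target)

# Stand-in batch: 32 clips, one continuous target and 7 hypothetical classes.
reg_pred, reg_target = torch.rand(32), torch.rand(32)
cls_logits, cls_target = torch.randn(32, 7), torch.randint(0, 7, (32,))
loss = combined_loss(reg_pred, reg_target, cls_logits, cls_target)
```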
ArabSign: A Multi-modality Dataset and Benchmark for Continuous Arabic Sign Language Recognition
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-10-08 DOI: 10.1109/FG57933.2023.10042720
H. Luqman
{"title":"ArabSign: A Multi-modality Dataset and Benchmark for Continuous Arabic Sign Language Recognition","authors":"H. Luqman","doi":"10.1109/FG57933.2023.10042720","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042720","url":null,"abstract":"Sign language recognition has attracted the interest of researchers in recent years. While numerous approaches have been proposed for European and Asian sign languages recognition, very limited attempts have been made to develop similar systems for the Arabic sign language (ArSL). This can be attributed partly to the lack of a dataset at the sentence level. In this paper, we aim to make a significant contribution by proposing ArabSign, a continuous ArSL dataset. The proposed dataset consists of 9,335 samples performed by 6 signers. The total time of the recorded sentences is around 10 hours and the average sentence's length is 3.1 signs. ArabSign dataset was recorded using a Kinect V2 camera that provides three types of information (color, depth, and skeleton joint points) recorded simultaneously for each sentence. In addition, we provide the annotation of the dataset according to ArSL and Arabic language structures that can help in studying the linguistic characteristics of ArSL. To benchmark this dataset, we propose an encoder-decoder model for Continuous ArSL recognition. The model has been evaluated on the proposed dataset, and the obtained results show that the encoder-decoder model outperformed the attention mechanism with an average word error rate (WER) of 0.50 compared with 0.62 with the attention mechanism. The data and code are available at https://github.com/Hamzah-Luqman/rabSign","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124864668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
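Recognition quality above is reported as word error rate (WER). For reference, the sketch below computes WER with the standard Levenshtein dynamic program over word (sign) sequences; the example sentences are invented.

```python
def word_error_rate(reference: list, hypothesis: list) -> float:
    """WER = (substitutions + deletions + insertions) / len(reference),
    computed with word-level Levenshtein dynamic programming."""
    n, m = len(reference), len(hypothesis)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[n][m] / max(n, 1)

ref = "i want to learn sign language".split()
hyp = "i want learn the sign language".split()
print(f"WER = {word_error_rate(ref, hyp):.2f}")
```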
Multi-Modal Human Authentication Using Silhouettes, Gait and RGB
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-10-08 DOI: 10.1109/FG57933.2023.10042572
Yuxiang Guo, Cheng Peng, Chun Pong Lau, R. Chellappa
{"title":"Multi-Modal Human Authentication Using Silhouettes, Gait and RGB","authors":"Yuxiang Guo, Cheng Peng, Chun Pong Lau, R. Chellappa","doi":"10.1109/FG57933.2023.10042572","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042572","url":null,"abstract":"Whole-body-based human authentication is a promising approach for remote biometrics scenarios. Current literature focuses on either body recognition based on RGB images or gait recognition based on body shapes and walking patterns; both have their advantages and drawbacks. In this work, we propose Dual-Modal Ensemble (DME), which combines both RGB and silhouette data to achieve more robust performances for indoor and outdoor whole-body based recognition. Within DME, we propose GaitPattern, which is inspired by the double helical gait pattern used in traditional gait analysis. The GaitPattern contributes to robust identification performance over a large range of viewing angles. Extensive experimental results on the CASIA-B dataset demonstrate that the proposed method outperforms state-of-the-art recognition systems. We also provide experimental results using the newly collected BRIAR dataset.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115701327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
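DME combines RGB-based and silhouette/gait-based recognition; the details of the ensemble are in the paper. As a generic illustration only, the sketch below fuses the two modalities at score level with a weighted sum of cosine similarities, where the embeddings, their dimensionality, and the weight are assumptions.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def fused_score(rgb_probe, rgb_gallery, gait_probe, gait_gallery, w_rgb: float = 0.5) -> float:
    """Score-level fusion: weighted sum of per-modality similarity scores.
    The weight is a free parameter here, not a value from the paper."""
    return w_rgb * cosine(rgb_probe, rgb_gallery) + (1 - w_rgb) * cosine(gait_probe, gait_gallery)

# Hypothetical 256-d embeddings from separate RGB and silhouette/gait branches.
rng = np.random.default_rng(0)
rgb_p, rgb_g = rng.normal(size=256), rng.normal(size=256)
gait_p, gait_g = rng.normal(size=256), rng.normal(size=256)
print(f"fused similarity = {fused_score(rgb_p, rgb_g, gait_p, gait_g):.3f}")
```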
StyleMask: Disentangling the Style Space of StyleGAN2 for Neural Face Reenactment
2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG) Pub Date : 2022-09-27 DOI: 10.1109/FG57933.2023.10042744
Stella Bounareli, Christos Tzelepis, V. Argyriou, I. Patras, Georgios Tzimiropoulos
{"title":"StyleMask: Disentangling the Style Space of StyleGAN2 for Neural Face Reenactment","authors":"Stella Bounareli, Christos Tzelepis, V. Argyriou, I. Patras, Georgios Tzimiropoulos","doi":"10.1109/FG57933.2023.10042744","DOIUrl":"https://doi.org/10.1109/FG57933.2023.10042744","url":null,"abstract":"In this paper we address the problem of neural face reenactment, where, given a pair of a source and a target facial image, we need to transfer the target's pose (defined as the head pose and its facial expressions) to the source image, by preserving at the same time the source's identity characteristics (e.g., facial shape, hair style, etc), even in the challenging case where the source and the target faces belong to different identities. In doing so, we address some of the limitations of the state-of-the-art works, namely, a) that they depend on paired training data (i.e., source and target faces have the same identity), b) that they rely on labeled data during inference, and c) that they do not preserve identity in large head pose changes. More specifically, we propose a framework that, using unpaired randomly generated facial images, learns to disentangle the identity characteristics of the face from its pose by incorporating the recently introduced style space S [1] of StyleGAN2 [2], a latent representation space that exhibits remarkable disentanglement properties. By capitalizing on this, we learn to successfully mix a pair of source and target style codes using supervision from a 3D model. The resulting latent code, that is subsequently used for reenactment, consists of latent units corresponding to the facial pose of the target only and of units corresponding to the identity of the source only, leading to notable improvement in the reenactment performance compared to recent state-of-the-art methods. In comparison to state of the art, we quantitatively and qualitatively show that the proposed method produces higher quality results even on extreme pose variations. Finally, we report results on real images by first embedding them on the latent space of the pretrained generator. We make the code and the pretrained models publicly available at: https://github.com/StelaBou/StyleMask.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131759885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
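StyleMask reenacts faces by mixing source and target codes in StyleGAN2's style space S so that pose-related units come from the target and identity-related units from the source. The sketch below shows only that channel-wise mixing step, with a hypothetical binary pose mask standing in for the learned disentanglement; the dimensionality and the mask fraction are illustrative assumptions.

```python
import torch

def mix_style_codes(source_s: torch.Tensor, target_s: torch.Tensor,
                    pose_mask: torch.Tensor) -> torch.Tensor:
    """Combine two style codes channel-wise: channels flagged as pose-related
    are taken from the target, all other channels from the source. The mask
    here is a stand-in for the disentanglement learned in the paper."""
    return torch.where(pose_mask.bool(), target_s, source_s)

dim = 9088  # illustrative style-space dimensionality; treat as an assumption
source_s = torch.randn(dim)                    # encodes the source identity
target_s = torch.randn(dim)                    # encodes the target pose / expression
pose_mask = (torch.rand(dim) < 0.05).float()   # ~5% of channels marked as pose units
reenact_s = mix_style_codes(source_s, target_s, pose_mask)
```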