Latest Articles: 2020 IEEE International Joint Conference on Biometrics (IJCB)

Anomaly Detection-Based Unknown Face Presentation Attack Detection
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-07-11 DOI: 10.1109/IJCB48548.2020.9304935
Yashasvi Baweja, Poojan Oza, Pramuditha Perera, Vishal M. Patel
Abstract: Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection (fPAD), in which a spoof detector is learned using only non-attacked images of users. These detectors are of practical importance, as they have been shown to generalize well to new attack types. In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection in which both the classifier and the feature representations are learned together end-to-end. First, we introduce a pseudo-negative class during training in the absence of attacked images. The pseudo-negative class is modeled using a Gaussian distribution whose mean is calculated by a weighted running mean. Second, we use a pairwise confusion loss to further regularize the training process. The proposed approach benefits from the representation-learning power of CNNs and learns better features for the fPAD task, as shown in our ablation study. We perform extensive experiments on four publicly available datasets (Replay-Attack, Rose-Youtu, OULU-NPU, and Spoof in the Wild) to show the effectiveness of the proposed approach over previous methods.
Code: https://github.com/yashasvi97/IJCB2020_anomaly
Citations: 26
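The two training ingredients in this abstract, a Gaussian pseudo-negative class centered on a weighted running mean and a pairwise confusion loss, can be sketched outside of any particular deep-learning framework. The momentum value, feature dimensionality, and sampling details below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_running_mean(mu, batch_feats, momentum=0.99):
    """Weighted running mean of real-face features; the Gaussian
    pseudo-negative class is centered at this mean (the momentum
    value is a hypothetical choice, not taken from the paper)."""
    return momentum * mu + (1.0 - momentum) * batch_feats.mean(axis=0)

def sample_pseudo_negatives(mu, sigma, n):
    """Draw pseudo-negative features from N(mu, sigma^2 I), standing
    in for the attacked images that are absent during training."""
    return rng.normal(loc=mu, scale=sigma, size=(n, mu.shape[0]))

def pairwise_confusion(probs_a, probs_b):
    """Pairwise confusion loss: penalizes divergent predictions for
    pairs of samples from the same (real) class, encouraging a
    compact real-face cluster."""
    return float(np.mean(np.sum((probs_a - probs_b) ** 2, axis=1)))

# toy usage: 128-D features, one batch of 32 real-face embeddings
mu = np.zeros(128)
feats = rng.standard_normal((32, 128))
mu = update_running_mean(mu, feats)
negs = sample_pseudo_negatives(mu, sigma=1.0, n=32)
```

In a full training loop, `negs` would be fed to the classifier as the attack class and the confusion term added to the cross-entropy objective.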
On the Influence of Ageing on Face Morph Attacks: Vulnerability and Detection
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-07-06 DOI: 10.1109/IJCB48548.2020.9304856
S. Venkatesh, K. Raja, Raghavendra Ramachandra, C. Busch
Abstract: Face morphing attacks have raised critical concerns, as they expose a new vulnerability of Face Recognition Systems (FRS), which are widely deployed in border-control applications. The face morphing process takes images from multiple data subjects and performs an image-blending operation to generate a morphed image of high quality. The generated morphed image exhibits visual characteristics similar to the biometric characteristics of the data subjects that contributed to the composite image, making such attacks difficult to detect for both humans and FRS. In this paper, we report a systematic investigation of the vulnerability of Commercial-Off-The-Shelf (COTS) FRS when morphed images under the influence of ageing are presented. To this end, we introduce a new morphed face dataset with ageing, derived from the publicly available MORPH II face dataset, which we refer to as the MorphAge dataset. The dataset has two bins based on age intervals: MorphAge-I contains 1002 unique data subjects with an age variation of 1 to 2 years, while MorphAge-II consists of 516 data subjects whose age intervals range from 2 to 5 years. To evaluate the vulnerability to morphing attacks effectively, we also introduce a new evaluation metric, the Fully Mated Morphed Presentation Match Rate (FMMPMR), to quantify the vulnerability in a realistic scenario. Extensive experiments are carried out using two different COTS FRS (COTS-I: Cognitec FaceVACS-SDK version 9.4.2 and COTS-II: Neurotechnology version 10.0) to quantify the vulnerability under ageing. Further, we evaluate five different Morph Attack Detection (MAD) techniques to benchmark their detection performance with respect to ageing.
Citations: 19
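Under one plausible reading of the metric's name, a "fully mated" morphed presentation succeeds only when every subject who contributed to the morph matches it above the verification threshold. The sketch below encodes that reading; the paper's exact formulation may differ.

```python
def fmmpmr(attempts, threshold):
    """Fully Mated Morphed Presentation Match Rate, sketched as the
    fraction of morph presentation attempts in which the comparison
    scores of *all* contributing subjects exceed the verification
    threshold (one plausible reading of the metric).

    attempts: list of attempts, each a list of comparison scores,
    one per contributing data subject."""
    if not attempts:
        return 0.0
    hits = sum(1 for scores in attempts if min(scores) > threshold)
    return hits / len(attempts)

# toy usage: 3 attempts, 2 contributing subjects per morph
attempts = [[0.82, 0.77], [0.91, 0.40], [0.66, 0.71]]
rate = fmmpmr(attempts, threshold=0.6)  # 2 of 3 attempts fully match
```

Requiring the minimum score to clear the threshold is what makes the metric stricter, and arguably more realistic, than counting any single matching subject as a success.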
D-NetPAD: An Explainable and Interpretable Iris Presentation Attack Detector
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-07-02 DOI: 10.1109/IJCB48548.2020.9304880
Renu Sharma, A. Ross
Abstract: An iris recognition system is vulnerable to presentation attacks (PAs), where an adversary presents artifacts such as printed eyes, plastic eyes, or cosmetic contact lenses to circumvent the system. In this work, we propose an effective and robust iris PA detector called D-NetPAD, based on the DenseNet convolutional neural network architecture. It demonstrates generalizability across PA artifacts, sensors, and datasets. Experiments conducted on a proprietary dataset and a publicly available dataset (LivDet-2017) substantiate the effectiveness of the proposed method for iris PA detection. The proposed method achieves a true detection rate of 98.58% at a false detection rate of 0.2% on the proprietary dataset and outperforms state-of-the-art methods on the LivDet-2017 dataset. We visualize intermediate feature distributions and fixation heatmaps using t-SNE plots and Grad-CAM, respectively, to explain the performance of D-NetPAD. Further, we conduct a frequency analysis to explain the nature of the features extracted by the network.
Code: The source code and trained model are available at https://github.com/iPRoBe-lab/D-NetPAD.
Citations: 36
Generating Master Faces for Use in Performing Wolf Attacks on Face Recognition Systems
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-06-15 DOI: 10.1109/IJCB48548.2020.9304893
H. Nguyen, J. Yamagishi, I. Echizen, S. Marcel
Abstract: Due to its convenience, biometric authentication, especially face authentication, has become increasingly mainstream and is now a prime target for attackers. Presentation attacks and face morphing are typical attack types. Previous research has shown that finger-vein- and fingerprint-based authentication methods are susceptible to wolf attacks, in which a single wolf sample matches many enrolled user templates. In this work, we demonstrate that wolf (generic) faces, which we call "master faces," can also compromise face recognition systems, and that the master face concept generalizes in some cases. Motivated by recent similar work in the fingerprint domain, we generate high-quality master faces using the state-of-the-art face generator StyleGAN in a process called latent variable evolution. Experiments demonstrate that even attackers with limited resources, using only pre-trained models available on the Internet, can initiate master face attacks. The results, in addition to demonstrating performance from the attacker's point of view, can also be used to clarify and improve the performance of face recognition systems and to harden face authentication systems.
Citations: 18
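Latent variable evolution, as named in the abstract, can be sketched as an evolution strategy over the generator's latent space: sample latents, score each candidate by how well it matches enrolled templates, and re-center on the best candidates. The population sizes, fixed step size, and toy fitness function below are illustrative stand-ins; the prior fingerprint work uses CMA-ES with a real matcher as the fitness function.

```python
import numpy as np

rng = np.random.default_rng(1)

def latent_variable_evolution(fitness, dim=512, pop=16, elite=4,
                              sigma=0.3, generations=50):
    """Minimal elite-mean evolution strategy over a generator's
    latent space. In the real attack, `fitness(z)` would decode z
    with StyleGAN and count how many enrolled templates the
    generated face matches; sigma is kept fixed here for brevity,
    whereas CMA-ES would adapt the full covariance."""
    mean = np.zeros(dim)
    for _ in range(generations):
        candidates = mean + sigma * rng.standard_normal((pop, dim))
        scores = np.array([fitness(z) for z in candidates])
        best = candidates[np.argsort(scores)[::-1][:elite]]
        mean = best.mean(axis=0)  # re-center on the elite set
    return mean

# toy fitness: negative distance to a hidden "easy-to-match" latent
target = rng.standard_normal(512)
fit = lambda z: -np.linalg.norm(z - target)
master = latent_variable_evolution(fit)
```

The point of the sketch is that the attacker never needs gradients from the face matcher, only black-box match scores, which is why pre-trained models from the Internet suffice.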
Occlusion-Adaptive Deep Network for Robust Facial Expression Recognition
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-05-12 DOI: 10.1109/IJCB48548.2020.9304923
Hui Ding, Peng Zhou, R. Chellappa
Abstract: Recognizing the expressions of partially occluded faces is a challenging computer vision problem. Previous expression recognition methods have either overlooked this issue or resolved it under unrealistic assumptions. Motivated by the fact that the human visual system is adept at ignoring occlusions and focusing on non-occluded facial areas, we propose a landmark-guided attention branch that finds and discards corrupted features from occluded regions so that they are not used for recognition. An attention map is first generated to indicate whether a specific facial part is occluded, guiding our model to attend to non-occluded regions. To further improve robustness, we propose a facial-region branch that partitions the feature maps into non-overlapping facial blocks and tasks each block with predicting the expression independently. This results in more diverse and discriminative features, enabling the expression recognition system to recover even when the face is partially occluded. Owing to the synergistic effect of the two branches, our occlusion-adaptive deep network significantly outperforms state-of-the-art methods on two challenging in-the-wild benchmark datasets and three real-world occluded-expression datasets.
Citations: 54
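The facial-region branch described above, in which each non-overlapping block of the feature map predicts the expression on its own, can be sketched as follows. The grid size, pooling choice, and per-block linear classifiers are illustrative assumptions, not the paper's architecture details.

```python
import numpy as np

rng = np.random.default_rng(3)

def region_branch(feature_map, classifiers, grid=2):
    """Partition a C x H x W feature map into a grid of
    non-overlapping blocks, average-pool each block, and let each
    block produce its own expression logits via a (hypothetical)
    per-block linear classifier."""
    c, h, w = feature_map.shape
    bh, bw = h // grid, w // grid
    preds = []
    for i in range(grid):
        for j in range(grid):
            block = feature_map[:, i * bh:(i + 1) * bh,
                                   j * bw:(j + 1) * bw]
            pooled = block.mean(axis=(1, 2))            # (C,)
            preds.append(classifiers[i * grid + j] @ pooled)
    return np.stack(preds)          # (grid*grid, n_classes)

# toy usage: 64-channel 8x8 feature map, 7 expression classes
fmap = rng.standard_normal((64, 8, 8))
clfs = [rng.standard_normal((7, 64)) for _ in range(4)]
preds = region_branch(fmap, clfs)
```

Because each block votes independently, an occluded block can be down-weighted (e.g. by the attention branch) without corrupting the predictions of the visible regions.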
PF-cpGAN: Profile to Frontal Coupled GAN for Face Recognition in the Wild
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-04-25 DOI: 10.1109/IJCB48548.2020.9304872
Fariborz Taherkhani, Veeru Talreja, J. Dawson, M. Valenti, N. Nasrabadi
Abstract: In recent years, owing to the emergence of deep learning, face recognition has achieved exceptional success. However, many of these deep face recognition models perform relatively poorly on profile faces compared to frontal faces. The major reason for this poor performance is that it is inherently difficult to learn pose-invariant deep representations that are useful for profile face recognition. In this paper, we hypothesize that the profile face domain possesses a gradual connection with the frontal face domain in the deep feature space. We exploit this connection by projecting profile and frontal faces into a common latent space and performing verification or retrieval in the latent domain. We leverage a coupled generative adversarial network (cpGAN) structure to find the hidden relationship between profile and frontal images in a latent common embedding subspace. Specifically, the cpGAN framework consists of two GAN-based sub-networks, one dedicated to the frontal domain and the other to the profile domain. Each sub-network learns a projection that maximizes the pairwise correlation between the two feature domains in a common embedding feature subspace. The efficacy of our approach compared with the state of the art is demonstrated on the CFP, CMU Multi-PIE, IJB-A, and IJB-C datasets.
Citations: 14
TypeNet: Scaling up Keystroke Biometrics
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-04-07 DOI: 10.1109/IJCB48548.2020.9304908
A. Acien, John V. Monaco, A. Morales, R. Vera-Rodríguez, Julian Fierrez
Abstract: We study the suitability of keystroke dynamics for authenticating 100K users typing free text. We first analyze the extent to which our method, based on a Siamese Recurrent Neural Network (RNN), is able to authenticate users when the amount of data per user is scarce, a common scenario in free-text keystroke authentication. With 1K test users, a population size comparable to previous works, TypeNet obtains an equal error rate of 4.8% using only 5 enrollment sequences and 1 test sequence per user, with 50 keystrokes per sequence. Using the same amount of data per user, as the number of test users is scaled up to 100K, performance degrades by less than 5% relative to the 1K case, demonstrating the potential of TypeNet to scale well to a large number of users. Our experiments are conducted on the Aalto University keystroke database which, to the best of our knowledge, is the largest free-text keystroke database, with more than 136M keystrokes from 168K users.
Citations: 22
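The equal error rate (EER) reported above is the operating point at which the false rejection rate and the false acceptance rate balance. A minimal sketch of computing it from Siamese distance scores follows; the threshold sweep is a generic implementation, not TypeNet's evaluation code.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Find the threshold where the false rejection rate (genuine
    pairs whose distance exceeds the threshold) is closest to the
    false acceptance rate (impostor pairs at or below it), and
    return the rate at that point. Scores are distances: lower
    means a better match."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = 2.0, 0.0
    for t in thresholds:
        frr = np.mean(genuine > t)     # rejected genuine pairs
        far = np.mean(impostor <= t)   # accepted impostor pairs
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2.0
    return eer

# toy usage: well-separated distance distributions
genuine = np.array([0.12, 0.20, 0.31, 0.25])
impostor = np.array([0.48, 0.55, 0.61, 0.43])
eer = equal_error_rate(genuine, impostor)  # separable scores -> 0.0
```

In the Siamese setting, each genuine score is the embedding distance between two sequences typed by the same user, and each impostor score the distance between sequences from different users.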
Fingerprint Presentation Attack Detection: A Sensor and Material Agnostic Approach
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-04-06 DOI: 10.1109/IJCB48548.2020.9304863
Steven A. Grosz, T. Chugh, Anil K. Jain
Abstract: The vulnerability of automated fingerprint recognition systems to presentation attacks (PAs), i.e., spoof or altered fingers, has been a growing concern, warranting the development of accurate and efficient presentation attack detection (PAD) methods. One major limitation of existing PAD solutions, however, is their poor generalization to new PA materials and fingerprint sensors not seen during training. In this study, we propose a robust PAD solution with improved cross-material and cross-sensor generalization. Specifically, we build on top of any CNN-based architecture trained for fingerprint spoof detection, combining it with cross-material spoof generalization via a style-transfer network wrapper. We also incorporate adversarial representation learning (ARL) in deep neural networks (DNNs) to learn sensor- and material-invariant representations for PAD. Experimental results on the LivDet 2015 and 2017 public-domain datasets demonstrate the effectiveness of the proposed approach.
Citations: 20
Face Quality Estimation and Its Correlation to Demographic and Non-Demographic Bias in Face Recognition
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-04-02 DOI: 10.1109/IJCB48548.2020.9304865
P. Terhorst, J. Kolf, N. Damer, Florian Kirchbuchner, Arjan Kuijper
Abstract: Face quality assessment aims at estimating the utility of a face image for the purpose of recognition. It is a key factor in achieving high face recognition performance. Currently, the high performance of these face recognition systems comes at the cost of a strong bias against demographic and non-demographic sub-groups. Recent work has shown that face quality assessment algorithms should adapt to the deployed face recognition system in order to achieve highly accurate and robust quality estimates. However, this could transfer the bias to the face quality assessment itself, leading to discriminatory effects, e.g., during enrolment. In this work, we present an in-depth analysis of the correlation between bias in face recognition and face quality assessment. Experiments were conducted on two publicly available datasets, captured under controlled and uncontrolled circumstances, with two popular face embeddings. We evaluated four state-of-the-art face quality assessment solutions for biases with respect to pose, ethnicity, and age. The experiments showed that the face quality assessment solutions assign significantly lower quality values to the subgroups affected by the recognition bias, demonstrating that these approaches are biased as well. This raises ethical questions concerning fairness and discrimination that future work will have to address.
Citations: 33
Are Gabor Kernels Optimal for Iris Recognition?
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-02-20 DOI: 10.1109/IJCB48548.2020.9304939
Aidan Boyd, A. Czajka, K. Bowyer
Abstract: Gabor kernels are widely accepted as the dominant filters for iris recognition. In this work we investigate, given the current interest in neural networks, whether Gabor kernels are the only family of functions performing best in iris recognition, or whether better filters can be learned directly from iris data. We deliberately use a single-layer convolutional neural network, as it mimics an iris-code-based algorithm. We learn two sets of data-driven kernels: one starting from randomly initialized weights and the other from an open-source set of Gabor kernels. Through experimentation, we show that the network does not converge on Gabor kernels, instead converging on a mix of edge detectors, blob detectors, and simple waves. In experiments carried out with three subject-disjoint datasets, we found that the performance of these learned kernels is comparable to that of the open-source Gabor kernels. This leads us to two conclusions: (a) the family of functions offering optimal performance in iris recognition is wider than Gabor kernels, and (b) we have probably hit the maximum performance for an iris coding algorithm that uses a single convolutional layer, albeit with multiple filters. Released with this work is a framework for learning data-driven kernels that can easily be transplanted into open-source iris recognition software (for instance, OSIRIS: Open Source IRIS).
Citations: 2
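For reference, the standard 2-D Gabor kernel against which the learned filters are compared is a sinusoidal carrier under a Gaussian envelope, and can be generated as follows. The parameter values are illustrative, not those of any particular iris matcher.

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0,
                 gamma=0.5, psi=0.0):
    """Standard 2-D Gabor kernel: a cosine carrier of wavelength
    `lam` at orientation `theta`, modulated by a Gaussian envelope
    of scale `sigma` and aspect ratio `gamma` (parameter values are
    illustrative defaults)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

# one bit of an iris-code-like response: sign of the filtered patch
patch = np.random.default_rng(2).standard_normal((15, 15))
bit = int(np.sum(gabor_kernel() * patch) > 0)
```

A single-layer CNN with a bank of such kernels followed by sign quantization is exactly the structure the paper's network mimics, which is why learned kernels can be dropped into iris-code software like OSIRIS.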