Latest publications: 2021 IEEE International Joint Conference on Biometrics (IJCB)

Iris Presentation Attack Detection by Attention-based and Deep Pixel-wise Binary Supervision Network
2021 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2021-06-28 | DOI: 10.1109/IJCB52358.2021.9484343
Meiling Fang, N. Damer, F. Boutros, Florian Kirchbuchner, Arjan Kuijper
Abstract: Iris presentation attack detection (PAD) plays a vital role in iris recognition systems. Most existing CNN-based iris PAD solutions (1) perform only binary label supervision during the training of CNNs, serving global information learning but weakening the capture of local discriminative features, (2) prefer stacked deeper convolutions or expert-designed networks, raising the risk of overfitting, and (3) fuse multiple PAD systems or various types of features, increasing the difficulty of deployment on mobile devices. Hence, we propose a novel attention-based deep pixel-wise binary supervision (A-PBS) method. Pixel-wise supervision first captures fine-grained pixel/patch-level cues; the attention mechanism then guides the network to automatically find the regions that contribute most to an accurate PAD decision. Extensive experiments on LivDet-Iris 2017 and three other publicly available databases show the effectiveness and robustness of the proposed A-PBS method. For instance, the A-PBS model achieves an HTER of 6.50% on the IIITD-WVU database, outperforming state-of-the-art methods.
Citations: 20
Attention-guided Progressive Mapping for Profile Face Recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2021-06-27 | DOI: 10.1109/IJCB52358.2021.9484342
Junyang Huang, Changxing Ding
Abstract: The past few years have witnessed great progress in face recognition thanks to advances in deep learning. However, cross-pose face recognition remains a significant challenge. It is difficult for many deep learning algorithms to narrow the performance gap caused by pose variations, mainly because of the intra-class discrepancy between face images in different poses and the pose imbalance of training datasets. Learning pose-robust features by traversing to the feature space of frontal faces provides an effective and cheap way to alleviate this problem. In this paper, we present a method for progressively transforming profile face representations to the canonical pose with an attentive pair-wise loss. First, to reduce the difficulty of directly transforming profile face features into frontal ones, we learn the feature residual between the source pose and a nearby pose in a block-by-block fashion, traversing to the feature space of a smaller pose by adding the learned residual. Second, we propose an attentive pair-wise loss to guide the feature transformation in the most effective direction. Finally, the proposed progressive module and attentive pair-wise loss are lightweight and easy to implement, adding only about 7.5% extra parameters. Evaluations on the CFP and CPLFW datasets demonstrate the superiority of the proposed method. Code is available at https://github.com/hjy1312/AGPM.
Citations: 2
FDeblur-GAN: Fingerprint Deblurring using Generative Adversarial Network
2021 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2021-06-21 | DOI: 10.1109/IJCB52358.2021.9484406
Amol S. Joshi, Ali Dabouei, J. Dawson, N. Nasrabadi
Abstract: When working with fingerprint images acquired from crime scenes, mobile cameras, or low-quality sensors, it becomes difficult for automated identification systems to verify identity due to image blur and distortion. We propose FDeblur-GAN, a fingerprint deblurring model based on conditional Generative Adversarial Networks (cGANs) and the multi-stage framework of the stack GAN. Additionally, we integrate two auxiliary sub-networks into the model for the deblurring task. The first sub-network is a ridge extractor model, added to generate ridge maps that ensure fingerprint information and minutiae are preserved during deblurring and prevent the model from generating erroneous minutiae. The second sub-network is a verifier that helps the generator preserve identity information during generation. Using a database of blurred fingerprints and corresponding ridge maps, the deep network learns to deblur the input blurry samples. We evaluate the proposed method in combination with two different fingerprint matching algorithms and achieve an accuracy of 95.18% on our fingerprint database for the task of matching deblurred and ground-truth fingerprints.
Citations: 10
Defending Touch-based Continuous Authentication Systems from Active Adversaries Using Generative Adversarial Networks
2021 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2021-06-15 | DOI: 10.1109/IJCB52358.2021.9484366
M. Agrawal, P. Mehrotra, Rajesh Kumar, R. Shah
Abstract: Previous studies have demonstrated that commonly studied (vanilla) touch-based continuous authentication systems (V-TCAS) are susceptible to population attacks. This paper proposes a novel Generative Adversarial Network assisted TCAS (G-TCAS) framework, which shows more resilience to the population attack. The G-TCAS framework was tested on a dataset of 117 users who interacted with a smartphone and tablet pair. On average, the increase in false accept rate (FAR) under attack was much higher for V-TCAS (22%) than for G-TCAS (13%) on the smartphone. Likewise, on the tablet, the FAR increase was 25% for V-TCAS compared to 6% for G-TCAS.
Citations: 6
Deception Detection and Remote Physiological Monitoring: A Dataset and Baseline Experimental Results
2021 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2021-06-11 | DOI: 10.1109/IJCB52358.2021.9484409
Jeremy Speth, Nathan Vance, A. Czajka, K. Bowyer, Diane Wright, P. Flynn
Abstract: We present the Deception Detection and Physiological Monitoring (DDPM) dataset and initial baseline results on it. Our application context is an interview scenario in which the interviewee attempts to deceive the interviewer on selected responses. The interviewee is recorded in RGB, near-infrared, and long-wave infrared, along with cardiac pulse, blood oxygenation, and audio. After collection, the data were annotated for interviewer/interviewee, curated, ground-truthed, and organized into train/test partitions for a set of canonical deception detection experiments. Baseline experiments found random accuracy for micro-expressions as an indicator of deception, but saccades can give a statistically significant response. We also estimated subject heart rates from face videos (remotely) with a mean absolute error as low as 3.16 bpm. The database contains almost 13 hours of recordings of 70 subjects and over 8 million visible-light, near-infrared, and thermal video frames, along with the corresponding metadata, audio, and pulse oximeter data. To our knowledge, this is the only collection offering recordings of five modalities in an interview scenario that can be used in both deception detection and remote photoplethysmography research.
Citations: 15
MiDeCon: Unsupervised and Accurate Fingerprint and Minutia Quality Assessment based on Minutia Detection Confidence
2021 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2021-06-10 | DOI: 10.1109/IJCB52358.2021.9484404
P. Terhorst, A. Boller, N. Damer, Florian Kirchbuchner, Arjan Kuijper
Abstract: An essential factor in achieving high accuracy in fingerprint recognition systems is the quality of the samples. Previous works mainly proposed supervised solutions based on image properties that neglect the minutiae extraction process, despite the fact that most fingerprint recognition techniques are based on detected minutiae. Consequently, a fingerprint image might be assigned a high quality even if the utilized minutia extractor produces unreliable information. In this work, we propose a novel concept for assessing minutia and fingerprint quality based on minutia detection confidence (MiDeCon). MiDeCon can be applied to an arbitrary deep-learning-based minutia extractor and does not require quality labels for learning. We propose using the detection reliability of an extracted minutia as its quality indicator. By combining the highest minutia qualities, MiDeCon also accurately determines the quality of a full fingerprint. Experiments are conducted on the publicly available FVC 2006 databases and compared against several baselines, including NIST's widely used fingerprint image quality software NFIQ1 and NFIQ2. The results demonstrate significantly stronger quality assessment performance of the proposed MiDeCon qualities than related works at both the minutia and fingerprint levels. The implementation is publicly available.
Citations: 4
On the use of automatically generated synthetic image datasets for benchmarking face recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2021-06-08 | DOI: 10.1109/IJCB52358.2021.9484363
Laurent Colbois, Tiago de Freitas Pereira, S. Marcel
Abstract: The availability of large-scale face datasets has been key to the progress of face recognition. However, due to licensing issues or copyright infringement, some datasets are no longer available (e.g., MS-Celeb-1M). Recent advances in Generative Adversarial Networks (GANs) for synthesizing realistic face images provide a pathway to replacing real datasets with synthetic datasets, both to train and to benchmark face recognition (FR) systems. This paper presents a study on benchmarking FR systems using a synthetic dataset. First, we introduce a methodology for generating a synthetic dataset, without the need for human intervention, by exploiting the latent structure of a StyleGAN2 model with multiple controlled factors of variation. Then we confirm that (i) the generated synthetic identities are not data subjects from the GAN's training dataset, which is verified on a synthetic dataset with 10K+ identities; and (ii) benchmarking results on the synthetic dataset are a good substitute, often providing error rates and system rankings similar to benchmarking on the real dataset.
Citations: 18
DFGC 2021: A DeepFake Game Competition
2021 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2021-06-02 | DOI: 10.1109/IJCB52358.2021.9484387
Bo Peng, Hongxing Fan, Wei Wang, Jing Dong, Yuezun Li, Siwei Lyu, Qi Li, Zhenan Sun, Han Chen, Baoying Chen, Yanjie Hu, Shenghai Luo, Junrui Huang, Yutong Yao, Boyuan Liu, H. Ling, Guo-jing Zhang, Zhi-liang Xu, Changtao Miao, Changlei Lu, Shan He, Xiaoyu Wu, Wanyi Zhuang
Abstract: This paper presents a summary of the DeepFake Game Competition (DFGC) 2021. DeepFake technology is developing fast, and realistic face-swaps are increasingly deceptive and hard to detect. At the same time, DeepFake detection methods are also improving. There is a two-party game between DeepFake creators and detectors. This competition provides a common platform for benchmarking the adversarial game between current state-of-the-art DeepFake creation and detection methods. In this paper, we present the organization, results, and top solutions of this competition, and we share the insights obtained during the event. We also release the DFGC-21 testing dataset collected from our participants to further benefit the research community.
Citations: 6
Does Face Recognition Error Echo Gender Classification Error?
2021 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2021-04-28 | DOI: 10.1109/IJCB52358.2021.9484346
Y. Qiu, Vítor Albiero, Michael C. King, K. Bowyer
Abstract: This paper is the first to explore whether images that are classified incorrectly by a face analytics algorithm (e.g., gender classification) are any more or less likely to participate in an image pair that results in a face recognition error. We analyze results from three gender classification algorithms (one open-source and two commercial) and two face recognition algorithms (one open-source and one commercial) on image sets representing four demographic groups (African-American female and male, Caucasian female and male). For impostor image pairs, our results show that pairs in which one image has a gender classification error have a better impostor distribution than pairs in which both images have correct gender classification, and so are less likely to generate a false match error. For genuine image pairs, individuals whose images have a mix of correct and incorrect gender classification have a worse genuine distribution (increased false non-match rate) than individuals whose images consistently have correct gender classification. Thus, compared to images with correct gender classification, images with gender classification errors have a lower false match rate and a higher false non-match rate.
Citations: 6
Federated Learning-based Active Authentication on Mobile Devices
2021 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2021-04-14 | DOI: 10.1109/IJCB52358.2021.9484338
Poojan Oza, Vishal M. Patel
Abstract: User active authentication on mobile devices aims to learn a model that can correctly recognize the enrolled user based on device sensor information. Due to the lack of negative-class data, it is often modeled as a one-class classification problem. In practice, mobile devices are connected to a central server; e.g., all Android-based devices are connected to Google's servers through the internet. This device-server structure can be exploited by the recently proposed Federated Learning (FL) and Split Learning (SL) frameworks to perform collaborative learning over data distributed among multiple devices. Using FL/SL frameworks, we can alleviate the lack-of-negative-data problem by training a user authentication model over data from multiple users distributed across devices. To this end, we propose a novel user active authentication training scheme, termed Federated Active Authentication (FAA), that utilizes the principles of FL/SL. We first show that existing FL/SL methods are suboptimal for FAA, as they rely on the data being distributed homogeneously (i.e., IID) across devices, which is not true for FAA. We then propose a novel method that can handle the heterogeneous (non-IID) distribution of data in FAA. Specifically, we first extract feature statistics such as the mean and variance of each user's data, which are then combined on a central server to learn a multi-class classifier that is sent back to the individual devices. We conduct extensive experiments using three active authentication benchmark datasets (MOBIO, UMDAA-01, UMDAA-02) and show that this approach performs better than state-of-the-art one-class-based FAA methods and also outperforms traditional FL/SL methods.
Citations: 14