2020 IEEE International Joint Conference on Biometrics (IJCB): Latest Publications

Gait Recognition Based on 3D Skeleton Data and Graph Convolutional Network
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304916
Mengge Mao, Yonghong Song
Abstract: Gait recognition is a hot topic in the field of biometrics because of its unique advantages, such as non-contact acquisition at long distances. Appearance-based gait recognition methods usually extract features from silhouettes of the human body, which are easily affected by factors such as clothing and carried objects. Although model-based methods can effectively reduce the influence of appearance factors, they have high computational complexity. Therefore, this paper proposes a gait recognition method based on 3D skeleton data and a graph convolutional network. 3D skeleton data is robust to changes of view. In this paper, we extract 3D joint features and 3D bone features from the 3D skeleton data, design a dual graph convolutional network to extract the corresponding gait features, and fuse them at the feature level. At the same time, we use a multi-loss strategy combining center loss and softmax loss to optimize the network. Our method is evaluated on the CASIA-B dataset. The experimental results show that the proposed method achieves state-of-the-art performance and effectively reduces the influence of view, clothing, and other factors.
Citations: 12
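The multi-loss strategy this abstract mentions, combining a softmax (cross-entropy) loss with a center loss, is a standard formulation. A minimal NumPy sketch, not the authors' implementation (the weighting factor `lam` and all values below are illustrative assumptions):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for one sample.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def center_loss(feature, centers, label):
    # Half the squared distance between a feature vector and its class center.
    diff = feature - centers[label]
    return 0.5 * float(diff @ diff)

def multi_loss(logits, feature, centers, label, lam=0.01):
    # Combined objective: cross-entropy pulls classes apart,
    # center loss pulls same-class features together.
    return softmax_cross_entropy(logits, label) + lam * center_loss(feature, centers, label)

# Example: one gait feature vector, 3 identity classes (all values hypothetical).
centers = np.zeros((3, 4))               # learnable class centers, initialized to zero here
feature = np.array([1.0, 0.0, 0.0, 0.0])
logits = np.array([2.0, 0.5, 0.1])
loss = multi_loss(logits, feature, centers, label=0)
```

In training, the centers themselves are updated alongside the network weights, so the small `lam` keeps the clustering term from dominating the classification term.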
Specular- and Diffuse-reflection-based Face Spoofing Detection for Mobile Devices
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304862
Akinori F. Ebihara, K. Sakurai, Hitoshi Imaoka
Abstract: In light of the rising demand for biometric-authentication systems, preventing face spoofing attacks is a critical issue for the safe deployment of face recognition systems. Here, we propose an efficient face presentation attack detection (PAD) algorithm that requires minimal hardware and only a small database, making it suitable for resource-constrained devices such as mobile phones. Using one monocular visible-light camera, the proposed algorithm takes two facial photos, one with a flash and one without. The proposed SpecDiff descriptor is constructed by leveraging two types of reflection: (i) specular reflections from the iris region, whose intensity distribution depends on liveness, and (ii) diffuse reflections from the entire face region, which represent the 3D structure of a subject's face. Classifiers trained with the SpecDiff descriptor outperform other flash-based PAD algorithms on both an in-house database and the publicly available NUAA, Replay-Attack, and SiW databases. Moreover, the proposed algorithm achieves statistically significantly better accuracy than an end-to-end deep neural network classifier while running approximately six times faster. The code is publicly available at https://github.com/Akinori-F-Ebihara/SpecDiff-spoofing-detector.
Citations: 6
Learning to Learn Face-PAD: a lifelong learning approach
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304920
Daniel Pérez-Cabo, David Jiménez-Cabello, Artur Costa-Pazo, R. López-Sastre
Abstract: A face presentation attack detection (face-PAD) system determines whether a face corresponds to a presentation attack or not. The vast majority of proposed solutions consider a static scenario, where models are trained and evaluated on datasets in which all types of attacks and conditions are known beforehand. In a real-world scenario, however, the situation is very different: the types of attacks change over time, and new impersonation situations appear for which little training data is available. In this paper we tackle these problems by presenting, for the first time, a continual learning framework for PAD. We introduce a continual meta-learning PAD solution that can be trained on new attack scenarios following the continual few-shot learning paradigm, where the model uses only a small number of training samples. We also provide a thorough experimental evaluation using the GRAD-GPAD benchmark. Our results confirm the benefits of applying a continual meta-learning model to the real-world PAD scenario. Interestingly, our continuously trained solution, where data from new attacks arrives sequentially, is capable of recovering the accuracy achieved by a traditional solution that has all the data from all possible attacks from the beginning. In addition, our experiments show that when these traditional PAD solutions are trained on new attacks using a standard fine-tuning process, they suffer from catastrophic forgetting, while our model does not.
Citations: 10
Dense-View GEIs Set: View Space Covering for Gait Recognition based on Dense-View GAN
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-26 DOI: 10.1109/IJCB48548.2020.9304910
Rijun Liao, Weizhi An, Shiqi Yu, Zhu Li, Yongzhen Huang
Abstract: Gait recognition has proven effective for long-distance human recognition, but view variance of gait features changes human appearance greatly and reduces performance. Most existing gait datasets collect data at only a dozen different angles, or even fewer. Limited view angles prevent learning better view-invariant features. Robustness could be further improved by collecting data at 1° intervals across all angles, but collecting such a dataset is time-consuming and labor-intensive. In this paper, we therefore introduce a Dense-View GEIs Set (DV-GEIs) to deal with the challenge of limited view angles. This set covers the whole view space, from 0° to 180° at 1° intervals. In addition, Dense-View GAN (DV-GAN) is proposed to synthesize this dense-view set. DV-GAN consists of a generator, a discriminator, and a monitor, where the monitor is designed to preserve human identification and view information. The proposed method is evaluated on the CASIA-B and OU-ISIR datasets. The experimental results show that DV-GEIs synthesized by DV-GAN are an effective way to learn better view-invariant features. We believe the idea of densely generated view samples will further advance the development of gait recognition.
Citations: 9
Modeling Score Distributions and Continuous Covariates: A Bayesian Approach
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-21 DOI: 10.1109/IJCB48548.2020.9304938
Mel McCurrie, Hamish Nicholson, W. Scheirer, Samuel E. Anthony
Abstract: Computer vision practitioners must thoroughly understand their model's performance, but conditional evaluation is complex and error-prone. In biometric verification, model performance over continuous covariates (known, real-number attributes of images that affect performance) is particularly challenging to study. We develop a generative model of the match and non-match score distributions over continuous covariates and perform inference with modern Bayesian methods. We use mixture models to capture arbitrary distributions and local basis functions to capture non-linear, multivariate trends. Three experiments demonstrate the accuracy and effectiveness of our approach. First, we study the relationship between age and face verification performance and find that previous methods may overstate performance and confidence. Second, we study preprocessing for CNNs and find a highly non-linear, multivariate surface of model performance; our method is accurate and data-efficient when evaluated against previous synthetic methods. Third, we demonstrate a novel application of our method to pedestrian tracking and calculate variable thresholds and expected performance while controlling for multiple covariates.
Citations: 1
Beyond Identity: What Information Is Stored in Biometric Face Templates?
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-21 DOI: 10.1109/IJCB48548.2020.9304874
P. Terhorst, Daniel Fahrmann, N. Damer, Florian Kirchbuchner, Arjan Kuijper
Abstract: Deeply learned face representations enable the success of current face recognition systems. Despite the ability of these representations to encode the identity of an individual, recent works have shown that more information is stored within them, such as demographics, image characteristics, and social traits. This threatens the user's privacy, since for many applications these templates are expected to be used solely for recognition purposes. Knowing the information encoded in face templates helps to develop bias-mitigating and privacy-preserving face recognition technologies. This work aims to support the development of these two branches by analysing face templates with regard to 113 attributes. Experiments were conducted on two publicly available face embeddings. To evaluate the predictability of the attributes, we trained a massive attribute classifier that is additionally able to accurately state its prediction confidence, which allows more sophisticated statements about attribute predictability. The results demonstrate that up to 74 attributes can be accurately predicted from face templates. Non-permanent attributes in particular, such as age, hairstyles, hair colors, beards, and various accessories, were found to be easily predictable. Since face recognition systems aim to be robust against these variations, future research might build on this work to develop more understandable privacy-preserving solutions and build robust and fair face templates.
Citations: 22
Analysis of Dilation in Children and its Impact on Iris Recognition
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-02 DOI: 10.1109/IJCB48548.2020.9304911
Priyanka Das, Laura Holsopple, S. Schuckers, M. Schuckers
Abstract: The dilation of the pupil and its variation between a mated pair of irides have been found to be an important factor in the performance of iris recognition systems. Studies on adult irides indicated a significant impact of dilation on iris recognition performance at different ages. However, results on adults may not necessarily translate to children. This study analyzes dilation as a factor of age and over time in children, using data collected from the same 209 subjects, aged four to 11 years at enrollment, longitudinally over three years at six-month intervals. The performance of iris recognition is also analyzed in the presence of dilation variation.
Citations: 2
Recognition Oriented Iris Image Quality Assessment in the Feature Space
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-01 DOI: 10.1109/IJCB48548.2020.9304896
Leyuan Wang, Kunbo Zhang, Min Ren, Yunlong Wang, Zhenan Sun
Abstract: A large portion of iris images captured in real-world scenarios are of poor quality due to uncontrolled environments and non-cooperative subjects. To ensure that the recognition algorithm is not affected by low-quality images, traditional methods based on hand-crafted quality factors discard most images, which causes system timeouts and disrupts the user experience. In this paper, we propose a recognition-oriented quality metric and assessment method for iris images to deal with this problem. The method regards the iris image embeddings' Distance in Feature Space (DFS) as the quality metric, and the prediction is based on deep neural networks with an attention mechanism. The proposed quality metric can significantly improve the performance of the recognition algorithm while reducing the number of images discarded for recognition, which is advantageous over iris quality assessment methods based on hand-crafted factors. The relationship between Image Rejection Rate (IRR) and Equal Error Rate (EER) is proposed to evaluate the performance of the quality assessment algorithm under the same image quality distribution and the same recognition algorithm. Compared with methods based on hand-crafted factors, the proposed method is an attempt to bridge the gap between image quality assessment and biometric recognition.
Citations: 2
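The IRR-vs-EER evaluation described in this abstract can be illustrated with a toy computation: discard the lowest-quality fraction of samples, then compute the EER on what remains. A hedged sketch with synthetic score and quality distributions (none of these numbers come from the paper):

```python
import numpy as np

def eer(genuine, impostor):
    # Sweep thresholds; EER is approximated where false-accept
    # and false-reject rates cross.
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best = 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors accepted
        frr = np.mean(genuine < t)     # genuines rejected
        best = min(best, max(far, frr))
    return best

rng = np.random.default_rng(0)
quality = rng.uniform(0, 1, 1000)                      # per-sample quality (e.g. derived from DFS)
genuine = rng.normal(0.7, 0.1, 1000) + 0.2 * quality   # better quality -> higher match score
impostor = rng.normal(0.3, 0.1, 1000)

def eer_at_irr(irr):
    # Discard the fraction `irr` of lowest-quality genuine samples,
    # then compute the EER on the remaining comparisons.
    keep = quality >= np.quantile(quality, irr)
    return eer(genuine[keep], impostor)
```

Plotting `eer_at_irr` over a range of rejection rates gives the IRR-EER trade-off curve the paper uses to compare quality assessment algorithms under a fixed recognizer.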
Iris Liveness Detection Competition (LivDet-Iris) - The 2020 Edition
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-01 DOI: 10.1109/IJCB48548.2020.9304941
Priyanka Das, Joseph McGrath, Zhaoyuan Fang, Aidan Boyd, Ganghee Jang, A. Mohammadi, Sandip Purnapatra, David Yambay, S. Marcel, Mateusz Trokielewicz, P. Maciejewicz, K. Bowyer, A. Czajka, S. Schuckers, Juan E. Tapia, Sebastián González, Meiling Fang, N. Damer, F. Boutros, Arjan Kuijper, Renu Sharma, Cunjian Chen, A. Ross
Abstract: Launched in 2013, LivDet-Iris is an international competition series open to academia and industry with the aim of assessing and reporting advances in iris Presentation Attack Detection (PAD). This paper presents results from the fourth competition of the series: LivDet-Iris 2020. This year's competition introduced several novel elements: (a) it incorporated new types of attacks (samples displayed on a screen, cadaver eyes, and prosthetic eyes); (b) it initiated LivDet-Iris as an ongoing effort, with a testing protocol now available to everyone via the Biometrics Evaluation and Testing (BEAT) open-source platform to facilitate continuous reproducibility and benchmarking of new algorithms; and (c) it compared the performance of the submitted entries with three baseline methods (offered by the University of Notre Dame and Michigan State University) and three open-source iris PAD methods available in the public domain. The best-performing entry to the competition reported a weighted average APCER of 59.10% and a BPCER of 0.46% over all five attack types. This paper serves as the latest evaluation of iris PAD on a large spectrum of presentation attack instruments.
Citations: 29
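The headline metrics above follow the usual PAD convention: APCER is the fraction of attack presentations misclassified as bona fide (computed per attack type, here averaged across types with weights), and BPCER is the fraction of bona fide presentations misclassified as attacks. A minimal sketch; the per-type error rates and weights below are illustrative, not the competition's data:

```python
def weighted_apcer(per_type_apcer, weights):
    # Weighted average of per-attack-type APCERs; weights must sum to 1.
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(a * w for a, w in zip(per_type_apcer, weights))

def bpcer(bona_fide_decisions):
    # Fraction of bona fide presentations wrongly classified as attacks.
    return sum(1 for d in bona_fide_decisions if d == "attack") / len(bona_fide_decisions)

# Five hypothetical attack types (e.g. print, contact lens, screen replay,
# cadaver eye, prosthetic eye) with made-up per-type error rates.
apcer = weighted_apcer([0.10, 0.25, 0.80, 0.95, 0.60], [0.2] * 5)
bp = bpcer(["bona_fide"] * 995 + ["attack"] * 5)
```

Reporting both numbers matters because a detector can trade one for the other; the competition's 59.10% APCER at 0.46% BPCER reflects a very conservative operating point on unseen attack types.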
iLGaCo: Incremental Learning of Gait Covariate Factors
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-08-31 DOI: 10.1109/IJCB48548.2020.9304857
Zihao Mu, F. M. Castro, M. Marín-Jiménez, Nicolás Guil Mata, Yan-Ran Li, Shiqi Yu
Abstract: Gait is a popular biometric pattern used to identify people by their way of walking. Traditionally, gait recognition approaches based on deep learning are trained using the whole training dataset; if new data (classes, view-points, walking conditions, etc.) need to be included, the model must be re-trained with both old and new data samples. In this paper, we propose iLGaCo, the first incremental learning approach for covariate factors in gait recognition, where the deep model can be updated with new information without re-training it from scratch on the whole dataset. Instead, our approach performs a shorter training process with the new data and a small subset of previous samples. This way, our model learns new information while retaining previous knowledge. We evaluate iLGaCo on the CASIA-B dataset in two incremental settings: adding new view-points and adding new walking conditions. In both cases, our results are close to the classical training-from-scratch approach, with a marginal drop in accuracy ranging from 0.2% to 1.2%, which shows the efficacy of our approach. In addition, comparing iLGaCo with other incremental learning methods, such as LwF and iCaRL, shows a significant improvement in accuracy, between 6% and 15% depending on the experiment.
Citations: 2
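The core idea in this abstract, a short training pass on the new data plus a small subset of previously seen samples, is exemplar rehearsal. A hedged sketch of how such a training batch might be assembled; the buffer size, batch composition, and sampling policy are assumptions, not iLGaCo's actual settings:

```python
import random

def build_rehearsal_batch(new_samples, memory_buffer, batch_size, old_fraction=0.25):
    # Mix new-covariate samples with a few stored exemplars so the
    # model learns the new condition without forgetting earlier ones.
    n_old = min(int(batch_size * old_fraction), len(memory_buffer))
    n_new = batch_size - n_old
    batch = random.sample(new_samples, min(n_new, len(new_samples)))
    batch += random.sample(memory_buffer, n_old)
    random.shuffle(batch)
    return batch

random.seed(0)
memory = [("old_view", i) for i in range(100)]   # exemplars kept from previous view-points
new = [("new_view", i) for i in range(400)]      # samples from the newly added view-point
batch = build_rehearsal_batch(new, memory, batch_size=32)
```

Keeping even a small rehearsal fraction in each batch is what distinguishes this family of methods from plain fine-tuning, which the paper shows suffers large accuracy drops on the old covariates.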