2019 International Conference on Biometrics (ICB): Latest Publications

FaceQnet: Quality Assessment for Face Recognition based on Deep Learning
2019 International Conference on Biometrics (ICB) Pub Date : 2019-04-03 DOI: 10.1109/ICB45273.2019.8987255
J. Hernandez-Ortega, Javier Galbally, Julian Fierrez, Rudolf Haraksim, Laurent Beslay
{"title":"FaceQnet: Quality Assessment for Face Recognition based on Deep Learning","authors":"J. Hernandez-Ortega, Javier Galbally, Julian Fierrez, Rudolf Haraksim, Laurent Beslay","doi":"10.1109/ICB45273.2019.8987255","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987255","url":null,"abstract":"In this paper we develop a Quality Assessment approach for face recognition based on deep learning. The method consists of a Convolutional Neural Network, FaceQnet, that is used to predict the suitability of a specific input image for face recognition purposes. The training of FaceQnet is done using the VGGFace2 database. We employ the BioLab-ICAO framework for labeling the VGGFace2 images with quality information related to their ICAO compliance level. The groundtruth quality labels are obtained using FaceNet to generate comparison scores. We employ the groundtruth data to fine-tune a ResNet-based CNN, making it capable of returning a numerical quality measure for each input image. Finally, we verify if the FaceQnet scores are suitable to predict the expected performance when employing a specific image for face recognition with a COTS face recognition system. Several conclusions can be drawn from this work, most notably: 1) we managed to employ an existing ICAO compliance framework and a pretrained CNN to automatically label data with quality information, 2) we trained FaceQnet for quality estimation by fine-tuning a pre-trained face recognition network (ResNet-50), and 3) we have shown that the predictions from FaceQnet are highly correlated with the face recognition accuracy of a state-of-the-art commercial system not used during development. FaceQnet is publicly available in GitHub1.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128360543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 100
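To make the fine-tuning recipe above concrete, here is a minimal sketch in PyTorch of a ResNet-50 whose classification head is replaced by a scalar quality-regression head trained with an MSE loss against ground-truth quality labels. The head size, sigmoid output range, and optimizer settings are illustrative assumptions rather than FaceQnet's exact configuration, and the torchvision weight-loading syntax depends on the library version.

```python
# Minimal sketch, assuming PyTorch/torchvision; not the authors' exact setup.
import torch
import torch.nn as nn
from torchvision import models

class QualityRegressor(nn.Module):
    """ResNet-50 backbone fine-tuned to output a scalar quality score."""
    def __init__(self):
        super().__init__()
        # The paper starts from a face-recognition ResNet-50; ImageNet weights
        # here are only a stand-in initialization.
        backbone = models.resnet50(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()              # keep the 2048-d pooled feature
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2048, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid())      # quality score in [0, 1]

    def forward(self, x):
        return self.head(self.backbone(x)).squeeze(1)

model = QualityRegressor()
criterion = nn.MSELoss()                         # regress toward ground-truth quality labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

faces = torch.randn(8, 3, 224, 224)              # a batch of aligned face crops
labels = torch.rand(8)                           # ground-truth quality labels in [0, 1]
optimizer.zero_grad()
loss = criterion(model(faces), labels)
loss.backward()
optimizer.step()
```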
OGCTL: Occlusion-guided compact template learning for ensemble deep network-based pose-invariant face recognition
2019 International Conference on Biometrics (ICB) Pub Date : 2019-03-12 DOI: 10.1109/ICB45273.2019.8987272
Yuhang Wu, I. Kakadiaris
{"title":"OGCTL: Occlusion-guided compact template learning for ensemble deep network-based pose-invariant face recognition","authors":"Yuhang Wu, I. Kakadiaris","doi":"10.1109/ICB45273.2019.8987272","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987272","url":null,"abstract":"Concatenation of the deep network representations extracted from different facial patches helps to improve face recognition performance. However, the concatenated facial template increases in size and contains redundant information. Previous solutions aim to reduce the dimensionality of the facial template without considering the occlusion pattern of the facial patches. In this paper, we propose an occlusion-guided compact template learning (OGCTL) approach that only uses the information from visible patches to construct the compact template. The compact face representation is not sensitive to the number of patches that are used to construct the facial template, and is more suitable for incorporating the information from different view angles for image-set based face recognition. Instead of using occlusion masks in face matching (e.g., DPRFS [38]), the proposed method uses occlusion masks in template construction and achieves significantly better image-set based face verification performance on a challenging database with a template size that is an order-of-magnitude smaller than DPRFS.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122714497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
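As a rough illustration of the core idea, the sketch below aggregates per-patch embeddings into a single compact template while letting only visible patches contribute, using visibility-weighted pooling followed by a learned projection. The shapes, pooling rule, and projection layer are assumptions for illustration, not the authors' exact OGCTL formulation.

```python
import torch
import torch.nn as nn

class CompactTemplate(nn.Module):
    """Occlusion-guided pooling of per-patch embeddings into one compact template.
    Dimensions and the projection layer are illustrative, not the paper's design."""
    def __init__(self, in_dim=512, out_dim=256):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)   # learned compression to a small template

    def forward(self, patch_features, visibility):
        # patch_features: (P, in_dim); visibility: (P,) in [0, 1], 0 = occluded patch
        w = visibility / visibility.sum().clamp(min=1e-6)       # normalize over visible patches
        pooled = (w.unsqueeze(1) * patch_features).sum(dim=0)   # occluded patches contribute nothing
        return self.proj(pooled)

model = CompactTemplate()
feats = torch.randn(9, 512)                                   # 9 facial patches, 512-d each
vis = torch.tensor([1., 1., 1., 0., 0., 1., 1., 1., 1.])      # two patches occluded
template = model(feats, vis)                                  # fixed-size template, shape (256,)
```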
The Unconstrained Ear Recognition Challenge 2019
2019 International Conference on Biometrics (ICB) Pub Date : 2019-03-11 DOI: 10.1109/ICB45273.2019.8987337
Ž. Emeršič, S. V. A. Kumar, B. Harish, Weronika Gutfeter, J. Khiarak, A. Pacut, E. Hansley, Maurício Pamplona Segundo, Sudeep Sarkar, Hyeon-Nam Park, G. Nam, Ig-Jae Kim, S. G. Sangodkar, Umit Kacar, M. Kirci, Li Yuan, Jishou Yuan, Haonan Zhao, Fei Lu, Junying Mao, Xiaoshuang Zhang, Dogucan Yaman, Fevziye Irem Eyiokur, Kadir Bulut Özler, H. K. Ekenel, D. P. Chowdhury, Sambit Bakshi, B. Majhi, P. Peer, V. Štruc
{"title":"The Unconstrained Ear Recognition Challenge 2019","authors":"Ž. Emeršič, S. V. A. Kumar, B. Harish, Weronika Gutfeter, J. Khiarak, A. Pacut, E. Hansley, Maurício Pamplona Segundo, Sudeep Sarkar, Hyeon-Nam Park, G. Nam, Ig-Jae Kim, S. G. Sangodkar, Umit Kacar, M. Kirci, Li Yuan, Jishou Yuan, Haonan Zhao, Fei Lu, Junying Mao, Xiaoshuang Zhang, Dogucan Yaman, Fevziye Irem Eyiokur, Kadir Bulut Özler, H. K. Ekenel, D. P. Chowdhury, Sambit Bakshi, B. Majhi, P. Peer, V. Štruc","doi":"10.1109/ICB45273.2019.8987337","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987337","url":null,"abstract":"This paper presents a summary of the 2019 Unconstrained Ear Recognition Challenge (UERC), the second in a series of group benchmarking efforts centered around the problem of person recognition from ear images captured in uncontrolled settings. The goal of the challenge is to assess the performance of existing ear recognition techniques on a challenging large-scale ear dataset and to analyze performance of the technology from various viewpoints, such as generalization abilities to unseen data characteristics, sensitivity to rotations, occlusions and image resolution and performance bias on sub-groups of subjects, selected based on demographic criteria, i.e. gender and ethnicity. Research groups from 12 institutions entered the competition and submitted a total of 13 recognition approaches ranging from descriptor-based methods to deep-learning models. The majority of submissions focused on ensemble based methods combining either representations from multiple deep models or hand-crafted with learned image descriptors. Our analysis shows that methods incorporating deep learning models clearly outperform techniques relying solely on hand-crafted descriptors, even though both groups of techniques exhibit similar behavior when it comes to robustness to various covariates, such presence of occlusions, changes in (head) pose, or variability in image resolution. The results of the challenge also show that there has been considerable progress since the first UERC in 2017, but that there is still ample room for further research in this area.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130997915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
RoPAD: Robust Presentation Attack Detection through Unsupervised Adversarial Invariance
2019 International Conference on Biometrics (ICB) Pub Date : 2019-03-08 DOI: 10.1109/ICB45273.2019.8987276
Ayush Jaiswal, Shuai Xia, I. Masi, Wael AbdAlmageed
{"title":"RoPAD: Robust Presentation Attack Detection through Unsupervised Adversarial Invariance","authors":"Ayush Jaiswal, Shuai Xia, I. Masi, Wael AbdAlmageed","doi":"10.1109/ICB45273.2019.8987276","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987276","url":null,"abstract":"For enterprise, personal and societal applications, there is now an increasing demand for automated authentication of identity from images using computer vision. However, current authentication technologies are still vulnerable to presentation attacks. We present RoPAD, an end-to-end deep learning model for presentation attack detection that employs unsupervised adversarial invariance to ignore visual distractors in images for increased robustness and reduced overfitting. Experiments show that the proposed framework exhibits state-of-the-art performance on presentation attack detection on several benchmark datasets.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129120199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Video Face Recognition: Component-wise Feature Aggregation Network (C-FAN)
2019 International Conference on Biometrics (ICB) Pub Date : 2019-02-19 DOI: 10.1109/ICB45273.2019.8987385
Sixue Gong, Yichun Shi, N. Kalka, Anil K. Jain
{"title":"Video Face Recognition: Component-wise Feature Aggregation Network (C-FAN)","authors":"Sixue Gong, Yichun Shi, N. Kalka, Anil K. Jain","doi":"10.1109/ICB45273.2019.8987385","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987385","url":null,"abstract":"We propose a new approach to video face recognition. Our component-wise feature aggregation network (C-FAN) accepts a set of face images of a subject as an input, and outputs a single feature vector as the face representation of the set for the recognition task. The whole network is trained in two steps: (i) train a base CNN for still image face recognition; (ii) add an aggregation module to the base network to learn the quality value for each feature component, which adaptively aggregates deep feature vectors into a single vector to represent the face in a video. C-FAN automatically learns to retain salient face features with high quality scores while suppressing features with low quality scores. The experimental results on three benchmark datasets, YouTube Faces [39], IJB-A [13], and IJB-S [12] show that the proposed C-FAN network is capable of generating a compact feature vector with 512 dimensions for a video sequence by efficiently aggregating feature vectors of all the video frames to achieve state of the art performance.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125279793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 39
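A minimal sketch of the component-wise aggregation step described above: each of the 512 feature components receives its own per-frame quality score, a softmax over frames turns those scores into weights, and the weighted sum yields one video-level vector. The random quality scores below are placeholders standing in for the output of the aggregation module; this illustrates the pooling rule, not the trained C-FAN network.

```python
import torch
import torch.nn.functional as F

def component_wise_aggregate(frame_features, frame_quality):
    """Fuse per-frame face embeddings into a single video-level vector.

    frame_features: (T, D) deep features for T video frames
    frame_quality:  (T, D) per-component quality scores for each frame

    Each of the D components is pooled independently: a softmax over the T
    quality scores of that component gives the weights, so high-quality
    components dominate and low-quality ones are suppressed.
    """
    weights = F.softmax(frame_quality, dim=0)        # normalize over frames, per component
    return (weights * frame_features).sum(dim=0)     # (D,) aggregated template

feats = torch.randn(30, 512)       # 30 frames, 512-d embeddings (dimension as in the paper)
quality = torch.randn(30, 512)     # placeholder for the aggregation module's quality output
video_template = component_wise_aggregate(feats, quality)
```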
Periocular Recognition in the Wild with Orthogonal Combination of Local Binary Coded Pattern in Dual-stream Convolutional Neural Network
2019 International Conference on Biometrics (ICB) Pub Date : 2019-02-18 DOI: 10.1109/ICB45273.2019.8987278
L. Tiong, A. Teoh, Yunli Lee
{"title":"Periocular Recognition in the Wild with Orthogonal Combination of Local Binary Coded Pattern in Dual-stream Convolutional Neural Network","authors":"L. Tiong, A. Teoh, Yunli Lee","doi":"10.1109/ICB45273.2019.8987278","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987278","url":null,"abstract":"In spite of the advancements made in the periocular recognition, the dataset and periocular recognition in the wild remains a challenge. In this paper, we propose a multilayer fusion approach by means of a pair of shared parameters (dual-stream) convolutional neural network where each network accepts RGB data and a novel colour-based texture descriptor, namely Orthogonal Combination-Local Binary Coded Pattern (OC-LBCP) for periocular recognition in the wild. Specifically, two distinct late-fusion layers are introduced in the dual-stream network to aggregate the RGB data and OC-LBCP. Thus, the network beneficial from this new feature of the late-fusion layers for accuracy performance gain. We also introduce and share a new dataset for periocular in the wild, namely Ethnic-ocular dataset for benchmarking. The proposed network has also been assessed on one publicly available dataset, namely UBIPr. The proposed network outperforms several competing approaches on these datasets.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134614048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
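The following sketch shows the dual-stream, shared-parameter idea with a single late-fusion layer: the same convolutional backbone is applied to the RGB image and to an OC-LBCP descriptor map, and the two embeddings are concatenated and fused before classification. The backbone, layer sizes, and the random tensor used in place of a real OC-LBCP map are assumptions; the paper itself uses two distinct late-fusion layers and its own architecture.

```python
import torch
import torch.nn as nn

class DualStreamNet(nn.Module):
    """Two streams with shared convolutional weights: one takes the RGB image,
    the other a 3-channel OC-LBCP descriptor map; embeddings are fused late.
    Layer sizes are illustrative only."""
    def __init__(self, embed_dim=256, num_classes=100):
        super().__init__()
        self.shared = nn.Sequential(                       # shared-parameter backbone
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(64, embed_dim))
        self.fusion = nn.Linear(2 * embed_dim, embed_dim)  # late-fusion layer
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, rgb, oclbcp):
        f_rgb = self.shared(rgb)            # same weights applied to both inputs
        f_tex = self.shared(oclbcp)
        fused = torch.relu(self.fusion(torch.cat([f_rgb, f_tex], dim=1)))
        return self.classifier(fused)

net = DualStreamNet()
rgb = torch.randn(4, 3, 128, 128)
oclbcp = torch.randn(4, 3, 128, 128)        # hypothetical OC-LBCP descriptor maps
logits = net(rgb, oclbcp)
```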
Actions Speak Louder Than (Pass)words: Passive Authentication of Smartphone* Users via Deep Temporal Features
2019 International Conference on Biometrics (ICB) Pub Date : 2019-01-16 DOI: 10.1109/ICB45273.2019.8987433
Debayan Deb, A. Ross, Anil K. Jain, K. Prakah-Asante, K. Prasad
{"title":"Actions Speak Louder Than (Pass)words: Passive Authentication of Smartphone* Users via Deep Temporal Features","authors":"Debayan Deb, A. Ross, Anil K. Jain, K. Prakah-Asante, K. Prasad","doi":"10.1109/ICB45273.2019.8987433","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987433","url":null,"abstract":"Prevailing user authentication schemes on smartphones rely on explicit user interaction, where a user types in a passcode or presents a biometric cue such as face, fingerprint, or iris. In addition to being cumbersome and obtrusive to the users, such authentication mechanisms pose security and privacy concerns. Passive authentication systems can tackle these challenges by unobtrusively monitoring the user’s interaction with the device. We propose a Siamese Long Short-Term Memory (LSTM) network architecture for passive authentication, where users can be verified without requiring any explicit authentication step. On a dataset comprising of measurements from 30 smartphone sensor modalities for 37 users, we evaluate our approach on 8 dominant modalities, namely, keystroke dynamics, GPS location, accelerometer, gyroscope, magnetometer, linear accelerometer, gravity, and rotation sensors. Experimental results find that a genuine user can be correctly verified 96.47% a false accept rate of 0.1% within 3 seconds.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127197625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35
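A hedged sketch of a Siamese LSTM verifier in the spirit of the approach above: one shared LSTM encoder embeds a short window of multi-channel sensor readings, and a contrastive loss pulls windows from the same user together and pushes different users apart. The channel count, window length, embedding size, and margin are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorLSTM(nn.Module):
    """Embed a short window of multi-modal smartphone sensor readings.
    Input shape: (batch, time_steps, channels); sizes are illustrative."""
    def __init__(self, channels=24, hidden=64, embed=32):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, embed)

    def forward(self, x):
        _, (h, _) = self.lstm(x)             # final hidden state summarizes the window
        return F.normalize(self.fc(h[-1]), dim=1)

def contrastive_loss(z1, z2, same_user, margin=1.0):
    """Pull embeddings of the same user together, push different users apart."""
    d = F.pairwise_distance(z1, z2)
    return (same_user * d.pow(2) +
            (1 - same_user) * F.relu(margin - d).pow(2)).mean()

encoder = SensorLSTM()                        # the two branches share this encoder (Siamese)
a = encoder(torch.randn(16, 90, 24))          # e.g. a 3 s window over 24 sensor channels
b = encoder(torch.randn(16, 90, 24))
labels = torch.randint(0, 2, (16,)).float()   # 1 = same user, 0 = different users
loss = contrastive_loss(a, b, labels)
```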
Generalizing Fingerprint Spoof Detector: Learning a One-Class Classifier
2019 International Conference on Biometrics (ICB) Pub Date : 2019-01-13 DOI: 10.1109/ICB45273.2019.8987319
Joshua J. Engelsma, Anil K. Jain
{"title":"Generalizing Fingerprint Spoof Detector: Learning a One-Class Classifier","authors":"Joshua J. Engelsma, Anil K. Jain","doi":"10.1109/ICB45273.2019.8987319","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987319","url":null,"abstract":"Prevailing fingerprint recognition systems are vulnerable to spoof attacks. To mitigate these attacks, automated spoof detectors are trained to distinguish a set of live or bona fide fingerprints from a set of known spoof fingerprints. Despite their success, spoof detectors remain vulnerable when exposed to attacks from spoofs made with materials not seen during training of the detector. To alleviate this shortcoming, we approach spoof detection as a one-class classification problem. The goal is to train a spoof detector on only the live fingerprints such that once the concept of \"live\" has been learned, spoofs of any material can be rejected. We accomplish this through training multiple generative adversarial networks (GANS) on live fingerprint images acquired with the open source, dual-camera, 1900 ppi RaspiReader fingerprint reader. Our experimental results, conducted on 5.5K spoof images (from 12 materials) and 11.8K live images show that the proposed approach improves the cross-material spoof detection performance over state-of-the-art one-class and binary class spoof detectors on 11 of 12 testing materials and 7 of 12 testing materials, respectively.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126811186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 45
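One plausible way to read the scoring step is sketched below: several GAN discriminators trained only on live fingerprints each emit a "realness" score for a probe, the scores are fused (here by simple averaging, an assumption rather than the paper's fusion rule), and a low fused score flags a likely spoof. The discriminator architecture and the threshold are placeholders.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Tiny GAN discriminator (illustrative architecture only)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x).squeeze(1)

def liveness_score(image, discriminators):
    """Fuse the 'real' scores of discriminators trained only on live fingerprints.
    A low fused score suggests the input departs from the learned 'live' concept,
    i.e. a possible spoof. Averaging is an assumption, not the paper's rule."""
    with torch.no_grad():
        scores = torch.stack([d(image) for d in discriminators])
    return scores.mean(dim=0)

discriminators = [Discriminator() for _ in range(3)]    # in practice: trained GAN discriminators
probe = torch.randn(1, 1, 96, 96)                       # a fingerprint image patch
is_spoof = liveness_score(probe, discriminators) < 0.5  # threshold is illustrative
```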
Learning-Free Iris Segmentation Revisited: A First Step Toward Fast Volumetric Operation Over Video Samples
2019 International Conference on Biometrics (ICB) Pub Date : 2019-01-06 DOI: 10.1109/ICB45273.2019.8987377
Jeffery Kinnison, Mateusz Trokielewicz, Camila Carballo, A. Czajka, W. Scheirer
{"title":"Learning-Free Iris Segmentation Revisited: A First Step Toward Fast Volumetric Operation Over Video Samples","authors":"Jeffery Kinnison, Mateusz Trokielewicz, Camila Carballo, A. Czajka, W. Scheirer","doi":"10.1109/ICB45273.2019.8987377","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987377","url":null,"abstract":"Subject matching performance in iris biometrics is contingent upon fast, high-quality iris segmentation. In many cases, iris biometrics acquisition equipment takes a number of images in sequence and combines the segmentation and matching results for each image to strengthen the result. To date, segmentation has occurred in 2D, operating on each image individually. But such methodologies, while powerful, do not take advantage of potential gains in performance afforded by treating sequential images as volumetric data. As a first step in this direction, we apply the Flexible Learning-Free Reconstructoin of Neural Volumes (FLoRIN) framework, an open source segmentation and reconstruction framework originally designed for neural microscopy volumes, to volumetric segmentation of iris videos. Further, we introduce a novel dataset of near-infrared iris videos, in which each subject’s pupil rapidly changes size due to visible-light stimuli, as a test bed for FLoRIN. We compare the matching performance for iris masks generated by FLoRIN, deep-learning-based (SegNet), and Daugman’s (OSIRIS) iris segmentation approaches. We show that by incorporating volumetric information, FLoRIN achieves a factor of 3.6 to an order of magnitude increase in throughput with only a minor drop in subject matching performance. We also demonstrate that FLoRIN-based iris segmentation maintains this speedup on low-resource hardware, making it suitable for embedded biometrics systems.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126495708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
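To illustrate what "volumetric, learning-free" segmentation means here, the sketch below stacks video frames into a 3D volume and applies a neighborhood threshold that also spans the time axis. This is a simplified stand-in for FLoRIN's N-dimensional neighborhood thresholding; the window size and threshold factor are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def volumetric_threshold(frames, window=(3, 15, 15), t=0.9):
    """Learning-free segmentation of an iris video treated as a 3D volume.

    A voxel is kept as foreground (dark iris/pupil region) when it is darker
    than t times the mean intensity of its (frames x rows x cols) neighborhood,
    so neighboring frames contribute to each frame's mask.
    """
    volume = np.stack(frames).astype(np.float32)      # shape (T, H, W)
    local_mean = uniform_filter(volume, size=window)  # 3D box filter across the time axis too
    return volume < t * local_mean                    # boolean mask per frame

video = [np.random.rand(240, 320) for _ in range(10)]  # 10 synthetic near-infrared frames
masks = volumetric_threshold(video)                    # (10, 240, 320) boolean volume
```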
Iris Recognition with Image Segmentation Employing Retrained Off-the-Shelf Deep Neural Networks
2019 International Conference on Biometrics (ICB) Pub Date : 2019-01-04 DOI: 10.1109/ICB45273.2019.8987299
Daniel Kerrigan, Mateusz Trokielewicz, A. Czajka, K. Bowyer
{"title":"Iris Recognition with Image Segmentation Employing Retrained Off-the-Shelf Deep Neural Networks","authors":"Daniel Kerrigan, Mateusz Trokielewicz, A. Czajka, K. Bowyer","doi":"10.1109/ICB45273.2019.8987299","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987299","url":null,"abstract":"This paper offers three new, open-source, deep learning-based iris segmentation methods, and the methodology how to use irregular segmentation masks in a conventional Gabor-wavelet-based iris recognition. To train and validate the methods, we used a wide spectrum of iris images acquired by different teams and different sensors and offered publicly, including data taken from CASIA-Iris-Interval-v4, BioSec, ND-Iris-0405, UBIRIS, Warsaw-BioBase-Post-Mortem-Iris v2.0 (post-mortem iris images), and ND-TWINS-2009-2010 (iris images acquired from identical twins). This varied training data should increase the generalization capabilities of the proposed segmentation techniques. In database-disjoint training and testing, we show that deep learning-based segmentation outperforms the conventional (OSIRIS) segmentation in terms of Intersection over Union calculated between the obtained results and manually annotated ground-truth. Interestingly, the Gabor-based iris matching is not always better when deep learning-based segmentation is used, and is on par with the method employing Daugman’s based segmentation.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131273751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
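Since the comparison above is reported as Intersection over Union between predicted and manually annotated masks, a small reference implementation of that metric may be useful; the example masks are synthetic.

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection over Union between a predicted and a manually annotated
    binary iris mask, as used to compare the segmentation methods above."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0                      # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

pred = np.zeros((240, 320), dtype=bool)
pred[60:180, 100:220] = True            # synthetic predicted iris region
gt = np.zeros((240, 320), dtype=bool)
gt[65:185, 105:225] = True              # synthetic ground-truth iris region
print(f"IoU = {iou(pred, gt):.3f}")
```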