IEEE Transactions on Biometrics, Behavior, and Identity Science: Latest Articles

D-LORD: DYSL-AI Database for Low-Resolution Disguised Face Recognition
IEEE Transactions on Biometrics, Behavior, and Identity Science · Pub Date: 2023-08-18 · DOI: 10.1109/TBIOM.2023.3306703
Sunny Manchanda, Kaushik Bhagwatkar, Kavita Balutia, Shivang Agarwal, Jyoti Chaudhary, Muskan Dosi, Chiranjeev Chiranjeev, Mayank Vatsa, Richa Singh
Abstract: Face recognition in a low-resolution video stream captured from a surveillance camera is a challenging problem. It becomes even more complicated when the subjects in the video wear disguise artifacts to hide their identity or to impersonate someone else. The lack of labeled datasets restricts current research on low-resolution face recognition under disguise. With this paper, we propose a large-scale database, D-LORD, to facilitate research on disguised face recognition. The proposed D-LORD dataset includes high-resolution mugshot images of 2,100 individuals and 14,098 low-resolution surveillance videos, collectively containing over 1.2 million frames. Each frame in the dataset has been annotated with five facial keypoints and a single bounding box for each face. In the videos, subjects' faces are occluded by various disguise artifacts, such as face masks, sunglasses, wigs, hats, and monkey caps. To the best of our knowledge, D-LORD is the first database to address the complex problem of low-resolution face recognition with disguise variations. We also establish benchmark results for several state-of-the-art face detectors, frame selection algorithms, face restoration methods, and face verification algorithms using well-structured experimental protocols on the D-LORD dataset. The research findings indicate that the Genuine Acceptance Rate (GAR) at 1% False Acceptance Rate (FAR) varies between 49.45% and 86.44% across different disguises and distances. The dataset is publicly available to the research community at https://dyslai.org/datasets/D-LORD/.
Citations: 0
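The benchmark numbers above are reported as GAR at a fixed FAR. As a minimal sketch of how that operating point is computed from raw verification scores (assuming NumPy arrays of genuine and impostor similarity scores; this is the standard definition, not code from the paper):

```python
import numpy as np

def gar_at_far(genuine, impostor, target_far=0.01):
    """Genuine Acceptance Rate at a fixed False Acceptance Rate.

    genuine, impostor: 1-D arrays of similarity scores (higher = more alike).
    The threshold is set so that only `target_far` of impostor scores pass it.
    """
    threshold = np.quantile(impostor, 1.0 - target_far)
    return float(np.mean(genuine >= threshold))

# Toy usage: well-separated score distributions give a high GAR at 1% FAR.
rng = np.random.default_rng(0)
print(gar_at_far(rng.normal(0.8, 0.1, 10_000), rng.normal(0.4, 0.1, 10_000)))
```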
Multi-Day Analysis of Wrist Electromyogram-Based Biometrics for Authentication and Personal Identification
IEEE Transactions on Biometrics, Behavior, and Identity Science · Pub Date: 2023-08-14 · DOI: 10.1109/TBIOM.2023.3299948
Ashirbad Pradhan, Jiayuan He, Hyowon Lee, Ning Jiang
Abstract: Recently, the electromyogram (EMG) has been proposed to address some key limitations of current biometrics. Wrist-worn wearable sensors provide a non-invasive method for acquiring EMG signals for gesture recognition or biometric applications. EMG signals contain individual-specific information and can support multi-length codes or passwords (for example, a combination of hand gestures). However, current EMG-based biometric research has two critical limitations: a small subject pool and reliance on single-session datasets. In this study, wrist EMG data were collected from 43 participants over three different days (Days 1, 8, and 29) while they performed static hand/wrist gestures. Multi-day analysis, with training and testing data drawn from different days, was employed to test the robustness of EMG-based biometrics. Multi-day authentication resulted in a median equal error rate (EER) of 0.039 when the code is unknown to intruders and 0.068 when it is known. Multi-day identification achieved a median rank-5 accuracy of 93.0%. With intruders present, threshold-based identification achieved a median rank-5 accuracy of 91.7% while denying intruders access at a median rejection rate of 71.7%. These results demonstrate the potential of EMG-based biometrics in practical applications and support further research on EMG-based biometrics.
Citations: 0
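The EER figures quoted here are the operating point at which the false acceptance and false rejection rates coincide. A minimal sketch of that computation, assuming score arrays as above (standard definition, not the authors' code):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: the operating point where FAR (impostors accepted) equals
    FRR (genuine attempts rejected); returns their average at the crossing."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])
    frr = np.array([np.mean(genuine < t) for t in thresholds])
    crossing = np.argmin(np.abs(far - frr))
    return float((far[crossing] + frr[crossing]) / 2.0)
```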
On the Relation Between ROC and CMC
IEEE Transactions on Biometrics, Behavior, and Identity Science · Pub Date: 2023-07-25 · DOI: 10.1109/TBIOM.2023.3298561
Raymond N. J. Veldhuis, Kiran Raja
Abstract: We formulate a compact relation between the probabilistic Receiver Operating Characteristic (ROC) and the probabilistic Cumulative Match Characteristic (CMC) that predicts every entry of the probabilistic CMC as a functional on the probabilistic ROC. This result is shown to be valid for the individual probabilistic ROCs and CMCs of single identities, based on the assumption that each identity has individual mated and nonmated Probability Density Functions (PDFs). Furthermore, it is shown that the relation still holds between the global probabilistic CMC of a gallery of identities and the average probabilistic ROC obtained by averaging the individual probabilistic ROCs of these identities at constant False Match Rates (FMR). We illustrate that the difference between individual probabilistic ROCs, and the difference between global and average probabilistic ROCs, explain the discrepancies observed in the literature. The new formulation of the relation between probabilistic ROCs and CMCs allows us to prove that the probabilistic CMC, plotted as a function of fractional rank (i.e., linearly compressed to a domain ranging from 0 to 1), converges to the average probabilistic ROC as the gallery size increases. We illustrate our findings with experiments on synthetic data and on face, fingerprint, and iris data.
Citations: 0
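For orientation, the classical special case of such an ROC-to-CMC relation is worth recording. Under the simplifying assumption that all identities share one mated PDF $f_m$ and one nonmated CDF $F_n$, with statistically independent scores in a gallery of size $G$ (the standard binomial model, not the per-identity generalization this paper develops), the rank-$k$ identification rate is

\[
P(\mathrm{rank} \le k) \;=\; \sum_{j=0}^{k-1} \binom{G-1}{j} \int_{-\infty}^{\infty} f_m(s)\,\bigl[1-F_n(s)\bigr]^{j}\,F_n(s)^{\,G-1-j}\,\mathrm{d}s ,
\]

and because $\mathrm{FMR}(t) = 1 - F_n(t)$ and $\mathrm{FNMR}(t) = \int_{-\infty}^{t} f_m(s)\,\mathrm{d}s$, each CMC entry is indeed a functional on the ROC.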
Internal Structure Attention Network for Fingerprint Presentation Attack Detection From Optical Coherence Tomography
IEEE Transactions on Biometrics, Behavior, and Identity Science · Pub Date: 2023-07-13 · DOI: 10.1109/TBIOM.2023.3293910
Haohao Sun, Yilong Zhang, Peng Chen, Haixia Wang, Ronghua Liang
Abstract: As a non-invasive optical imaging technique, optical coherence tomography (OCT) has proven promising for automatic fingerprint recognition system (AFRS) applications. Diverse approaches have been proposed for OCT-based fingerprint presentation attack detection (PAD). However, given the complexity and variety of PA samples, it is extremely challenging to increase generalization ability with a limited PA dataset. To address this challenge, this paper presents a novel supervised-learning-based PAD method, denoted internal structure attention PAD (ISAPAD), which applies prior knowledge to guide network training. Specifically, the proposed dual-branch architecture in ISAPAD not only learns global features from the OCT images but also concentrates on the layered-structure features that come from the internal structure attention module (ISAM). The simple yet effective ISAM enables the network to obtain layered segmentation features belonging exclusively to bona fide samples from noisy OCT volume data. By incorporating effective training strategies and PAD score generation rules, ISAPAD ensures reliable PAD performance even with limited training data. Extensive experiments and visualization analysis substantiate the effectiveness of the proposed method for OCT-based PAD.
Citations: 0
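To make the dual-branch idea concrete, here is a deliberately small, hypothetical PyTorch sketch: one branch keeps global features while a one-channel spatial attention mask (a stand-in for the paper's ISAM, whose layered-structure segmentation prior is not reproduced here) re-weights them before scoring. Layer sizes and names are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DualBranchPAD(nn.Module):
    """Toy dual-branch PAD net: global features plus a spatial attention
    branch whose mask re-weights those features before classification."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(            # shared low-level features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.attn = nn.Sequential(                # stand-in for the ISAM branch:
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),    # 1-channel spatial mask in [0, 1]
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):                         # x: (B, 1, H, W) OCT slice
        feats = self.backbone(x)
        mask = self.attn(feats)                   # where the layered structure is
        fused = torch.cat([feats, feats * mask], dim=1)  # global + attended
        return self.head(fused)                   # PAD logit

logit = DualBranchPAD()(torch.randn(2, 1, 64, 64))  # smoke test: shape (2, 1)
```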
IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information
IEEE Transactions on Biometrics, Behavior, and Identity Science · Pub Date: 2023-07-01 · DOI: 10.1109/TBIOM.2023.3281994
Citations: 0
Best Paper Section IEEE International Conference on Automatic Face and Gesture Recognition 2021
IEEE Transactions on Biometrics, Behavior, and Identity Science · Pub Date: 2023-07-01 · DOI: 10.1109/TBIOM.2023.3296348
Rachael E. Jack, Vishal M. Patel, Pavan Turaga, Mayank Vatsa, Rama Chellappa, Alex Pentland, Richa Singh
Abstract: The IEEE International Conference on Automatic Face and Gesture Recognition (FG) is the premier international conference on vision-based automatic face and body behavior analysis and applications. Since the first meeting in Zurich in 1994, the FG conference has grown from a biennial conference into an annual meeting presenting the advancements and latest research developments in face and gesture analysis. FG2021 was planned as an in-person meeting hosted in the historic city of Jodhpur, India; however, due to the COVID-19 pandemic, the organizing committee decided to hold FG2021 as an online conference from December 15 to 18, 2021. Over 142 papers were presented at FG2021, and based on reviewer and area chair recommendations, the program chairs invited a set of top-reviewed papers as part of a special issue, "Best of Face & Gesture 2021," in the IEEE Transactions on Biometrics, Behavior, and Identity Science (T-BIOM). The meticulous T-BIOM review process ensured that significantly extended versions of research papers initially presented at FG2021 are included in this special issue. The nine accepted papers fall into three sets: (i) algorithms for 3D-information-based face/motion processing; (ii) algorithms for head pose estimation, emotion recognition, differentiable rendering, dictionary attacks, and group detection; and (iii) a student engagement dataset for affect transfer learning for behavior prediction.
Citations: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors
IEEE Transactions on Biometrics, Behavior, and Identity Science · Pub Date: 2023-07-01 · DOI: 10.1109/TBIOM.2023.3281995
Citations: 0
Cross-Modality Channel Mixup and Modality Decorrelation for RGB-Infrared Person Re-Identification
IEEE Transactions on Biometrics, Behavior, and Identity Science · Pub Date: 2023-06-26 · DOI: 10.1109/TBIOM.2023.3287275
Boyu Hua, Junyin Zhang, Ziqiang Li, Yongxin Ge
Abstract: This paper focuses on RGB-infrared person re-identification, which is challenged by the large modality gap between RGB and infrared images. Most existing methods attempt to learn discriminative modality-invariant features; they make use of identity annotations but do not sufficiently exploit intra-modality and cross-modality sample relations available from modality annotations. In this paper, we propose a Cross-modality channel Mixup and Modality Decorrelation method (CMMD) that explores sample relations at both the image and feature levels. The method is designed to reduce redundant modality-specific information in the representations and highlight modality-shared information. Specifically, we first design a cross-modality channel mixup (CCM) augmentation at the image level, which combines a random RGB channel and an infrared image to generate a new image by mixup while keeping identity information unchanged. This augmentation can be integrated into other methods easily without introducing extra parameters or models. In addition, a modality decorrelation quintuplet loss (MDQL) is presented to mine hard samples in a batch, that is, positive/negative intra-/cross-modality samples, to learn modality-invariant representations in the shared latent space at the feature level. This loss encourages the closest negative sample and the farthest positive sample to have an equal probability of appearing in either modality. Comprehensive experimental results on two challenging datasets, SYSU-MM01 and RegDB, demonstrate the competitive performance of our method against state-of-the-art approaches.
Citations: 0
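The CCM augmentation as described above is straightforward to sketch: take one random channel of an RGB image and mix it with the infrared image of the same identity. A minimal NumPy version, assuming `rgb` has shape (H, W, 3), `ir` has shape (H, W), and a Beta-sampled mixing coefficient as in standard mixup (the coefficient schedule is an assumption; the abstract does not specify it):

```python
import numpy as np

def cross_modality_channel_mixup(rgb, ir, alpha=1.0, rng=None):
    """Mix a randomly chosen RGB channel with the IR image of the same
    identity; the identity label is unchanged, so the augmentation is
    label-preserving as the abstract requires."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)     # mixing coefficient in [0, 1]
    channel = rng.integers(0, 3)     # which RGB channel to use
    return lam * rgb[..., channel] + (1.0 - lam) * ir
```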
Quality-Aware Fusion of Multisource Internal and External Fingerprints Under Multisensor Acquisition
IEEE Transactions on Biometrics, Behavior, and Identity Science · Pub Date: 2023-06-02 · DOI: 10.1109/TBIOM.2023.3280961
Haixia Wang, Mengbo Shi, Ronghua Liang, Yilong Zhang, Jingjing Cui, Peng Chen
Abstract: The fingerprint is one of the most widely used biometric traits. External fingerprints collected by total internal reflection and internal fingerprints obtained by optical coherence tomography have both been studied extensively, and studies have proven the consistency between them. The external fingerprint is susceptible to the condition of the fingertip, whereas the internal fingerprint has strong anti-interference and anti-spoofing capabilities. They originate from different skin layers and are collected by different sensors. Given the complementary advantages and disadvantages of external and internal fingerprints, their fusion can maximize effective information and improve fingerprint image quality, which benefits fingerprint identification. In this study, a fingerprint fusion method based on quality-aware convolutional-sparsity-based morphological component analysis (CSMCA) is proposed. The proposed method realizes the fusion of multisource, multisensor fingerprints for the first time. Quality indexes, namely spatial consistency, dryness, and humidity, are selected, and a simulation-based combination scheme is proposed for pixel-level quality representation. The quality index is integrated with CSMCA for quality-aware fusion, which retains high-quality components and suppresses low-quality areas. The experiments prove the superiority of the proposed method in quality scores and matching performance, indicating that internal fingerprints can amend external fingerprints. Matching experiments with either external or internal fingerprints show that the fused fingerprints are compatible with existing fingerprint databases. This work can provide references and insights for identifying mutilated fingerprints in the future.
Citations: 1
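As a rough illustration of the quality-aware idea (a simplified pixel-wise stand-in, not the paper's CSMCA decomposition): given per-pixel quality maps for the two sources, each output pixel can lean toward whichever source is locally more reliable.

```python
import numpy as np

def quality_weighted_fusion(external, internal, q_ext, q_int, eps=1e-8):
    """Pixel-wise quality-weighted blend of external and internal fingerprint
    images; q_ext and q_int are non-negative per-pixel quality maps."""
    weight = q_ext / (q_ext + q_int + eps)   # share given to the external image
    return weight * external + (1.0 - weight) * internal
```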
MD-Pose: Human Pose Estimation for Single-Channel UWB Radar
IEEE Transactions on Biometrics, Behavior, and Identity Science · Pub Date: 2023-04-10 · DOI: 10.1109/TBIOM.2023.3265206
Xiaolong Zhou, Tian Jin, Yongpeng Dai, Yongkun Song, Zhifeng Qiu
Abstract: Human pose estimation based on optical sensors struggles in harsh environments and under shielding. In this paper, a micro-Doppler (MD) based human pose estimation method for single-channel ultra-wideband (UWB) radar, called MD-Pose, is proposed. The MD characteristic reflects the kinematics of the human body and provides a unique means of identifying the target's posture, offering a more comprehensive perception of human posture. We explore the relationship between the human skeleton and the MD signature, which reveals the fundamental origins of these previously unexplained phenomena. The single-channel UWB radar is widely used because of its small size, low cost, and portability; in contrast, its resolution is lower than that of MIMO UWB radar. This paper therefore shows how to estimate fine-grained human posture from the MD signature with fewer channels. The MD spectrogram of the human target, obtained by the short-time Fourier transform (STFT), is the input to the proposed MD-Pose. A quasi-symmetric U-Net neural network is trained on the UWB radar MD spectrogram to estimate human keypoints. The experiments show quantitative results comparable with the state-of-the-art human pose estimation methods and provide the underlying insights needed to guide the design of radar-based human pose estimation.
Citations: 0
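The input representation described here, an MD spectrogram obtained by STFT, can be sketched with SciPy; the sampling rate, window length, and the toy phase-modulated echo below are illustrative assumptions, not the paper's radar parameters.

```python
import numpy as np
from scipy import signal

fs = 1000.0                                   # slow-time sampling rate, Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
# Toy complex radar return: sinusoidal phase modulation mimics a swinging limb.
x = np.exp(1j * 2.0 * np.pi * 60.0 * np.sin(2.0 * np.pi * 1.2 * t))
f, frames, Zxx = signal.stft(x, fs=fs, nperseg=128, noverlap=96,
                             return_onesided=False)
md_spectrogram = np.abs(Zxx)                  # (freq bins, frames): network input
```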