Latest Articles: IEEE Transactions on Biometrics, Behavior, and Identity Science

Diving Into Sample Selection for Facial Expression Recognition With Noisy Annotations
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-07-29. DOI: 10.1109/TBIOM.2024.3435498
Wei Nie;Zhiyong Wang;Xinming Wang;Bowen Chen;Hanlin Zhang;Honghai Liu
{"title":"Diving Into Sample Selection for Facial Expression Recognition With Noisy Annotations","authors":"Wei Nie;Zhiyong Wang;Xinming Wang;Bowen Chen;Hanlin Zhang;Honghai Liu","doi":"10.1109/TBIOM.2024.3435498","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3435498","url":null,"abstract":"Real-world Facial Expression Recognition (FER) suffers from noisy labels due to ambiguous expressions and subjective annotation. Overall, addressing noisy label FER involves two core issues: the efficient utilization of clean samples and the effective utilization of noisy samples. However, existing methods demonstrate their effectiveness solely through the generalization improvement by using all corrupted data, making it difficult to ascertain whether the observed improvement genuinely addresses these two issues. To decouple this dilemma, this paper focuses on efficiently utilizing clean samples by diving into sample selection. Specifically, we enhance the classical noisy label learning method Co-divide with two straightforward modifications, introducing a noisy label discriminator more suitable for FER termed IntraClass-divide. Firstly, IntraClass-divide constructs a class-separate two-component Gaussian Mixture Model (GMM) for each category instead of a shared GMM for all categories. Secondly, IntraClass-divide simplifies the framework by eliminating the dual-network training scheme. In addition to achieving the leading sample selection performance of nearly 95% Micro-F1 in standard synthetic noise paradigm, we first propose a natural noise paradigm and also achieve a leading sample selection performance of 82.63% Micro-F1. Moreover, we train a ResNet18 with the clean samples identified by IntraClass-divide yields better generalization performance than previous sophisticated noisy label FER models trained on all corrupted data.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 1","pages":"95-107"},"PeriodicalIF":0.0,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
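The selection step described above follows the Co-divide recipe: fit a two-component Gaussian Mixture Model (GMM) to per-sample training losses and treat the low-loss component as clean, except that IntraClass-divide fits one GMM per expression category. A minimal sketch of that idea, assuming per-sample losses have already been computed by an FER classifier; the function and variable names are illustrative, not the authors' code:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_clean_per_class(losses, labels, num_classes, threshold=0.5):
    """Fit a 2-component GMM to each class's loss distribution and keep
    samples whose posterior of belonging to the low-loss (presumed clean)
    component exceeds `threshold`."""
    clean_mask = np.zeros(len(losses), dtype=bool)
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        if len(idx) < 2:
            clean_mask[idx] = True  # too few samples to fit a mixture
            continue
        x = losses[idx].reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
        gmm.fit(x)
        clean_comp = int(np.argmin(gmm.means_.ravel()))  # low-loss mode
        p_clean = gmm.predict_proba(x)[:, clean_comp]
        clean_mask[idx] = p_clean > threshold
    return clean_mask

# toy usage: synthetic losses where ~20% of samples are noisy (high loss)
rng = np.random.default_rng(0)
labels = rng.integers(0, 7, size=1000)        # 7 basic expressions
losses = np.where(rng.random(1000) < 0.2,
                  rng.normal(2.5, 0.5, 1000),  # noisy: high loss
                  rng.normal(0.5, 0.2, 1000))  # clean: low loss
mask = select_clean_per_class(losses, labels, num_classes=7)
print(f"kept {mask.mean():.1%} of samples as clean")
```

Fitting one mixture per class keeps categories with systematically different loss scales from being lumped into a single shared model.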
Message From the Editor-in-Chief
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-07-18. DOI: 10.1109/TBIOM.2024.3420490
Nalini Ratha
{"title":"Message From the Editor-in-Chief","authors":"Nalini Ratha","doi":"10.1109/TBIOM.2024.3420490","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3420490","url":null,"abstract":"My three-year tenure as the Editor-in-Chief (EiC) of the IEEE Transactions on Biometrics, Behavior, and Identity Science (T-BIOM) draws to a close this June 2024. It’s been an exciting time to witness T-BIOM’s continued growth as a leading journal in biometrics research with a consistent rise in paper quality, thanks to the selection of top-reviewed papers from premier IEEE Biometrics Council conferences like IJCB, and IEEE Face and Gesture.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 3","pages":"288-288"},"PeriodicalIF":0.0,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10604473","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141725589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-07-18. DOI: 10.1109/TBIOM.2024.3399382
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information","authors":"","doi":"10.1109/TBIOM.2024.3399382","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3399382","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 3","pages":"C2-C2"},"PeriodicalIF":0.0,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10604674","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141725587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-07-18. DOI: 10.1109/TBIOM.2024.3399383
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors","authors":"","doi":"10.1109/TBIOM.2024.3399383","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3399383","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 3","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10604655","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141725573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Non-Motorized Lane Target Behavior Classification Based on Millimeter Wave Radar With P-Mrca Convolutional Neural Network
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-07-15. DOI: 10.1109/TBIOM.2024.3428577
Jiaqing He;Yihan Zhu;Bing Hua;Zhihuo Xu;Yongwei Zhang;Liu Chu;Quan Shi;Robin Braun;Jiajia Shi
{"title":"Non-Motorized Lane Target Behavior Classification Based on Millimeter Wave Radar With P-Mrca Convolutional Neural Network","authors":"Jiaqing He;Yihan Zhu;Bing Hua;Zhihuo Xu;Yongwei Zhang;Liu Chu;Quan Shi;Robin Braun;Jiajia Shi","doi":"10.1109/TBIOM.2024.3428577","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3428577","url":null,"abstract":"In the fields of road regulation and road safety, the classification of target behaviors for non-motorized lanes is of great significance. However, due to the influence of adverse weather and lighting conditions on the recognition efficiency, we use radar to perform target recognition on non-motorized lanes to cope with the challenges caused by frequent traffic accidents on non-motorized lanes. In this paper, a classification and recognition method for non-motorized lane target behavior is proposed. Firstly, a radar data acquisition system is constructed to extract the micro-Doppler features of the target. Then, in view of the shortcomings of traditional deep learning networks, this paper proposes a multi-scale residual channel attention mechanism that can better perform multi-scale feature extraction and adds it to the convolutional neural network (CNN) model to construct a multi-scale residual channel attention network (MrcaNet), which can identify and classify target behaviors specific to non-motorized lanes. In order to better combine the feature information contained in the high-level features and the low-level features, MrcaNet was combined with the feature pyramid structure, and a more efficient network model feature pyramid-multi-scale residual channel attention network (P-MrcaNet) was designed. The results show that the model has the best scores on classification indexes such as accuracy, precision, recall rate, F1 value and Kappa coefficient, which are about 10% higher than traditional deep learning methods. The classification effect of this method not only performs well on this paper’s dataset, but also has good adaptability on public datasets.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 1","pages":"71-81"},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
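The abstract names the key ingredients of MrcaNet: multi-scale feature extraction, channel attention, and residual connections inside a CNN. The exact architecture is not given in this listing, so the following PyTorch sketch only illustrates the generic pattern, a residual block with two parallel receptive fields and a squeeze-and-excitation-style channel gate; the class and parameter names are ours, not the paper's:

```python
import torch
import torch.nn as nn

class MultiScaleResidualChannelAttention(nn.Module):
    """Sketch: parallel convolutions at different receptive fields,
    an SE-style channel gate, and a residual connection."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.gate = nn.Sequential(            # channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feat = self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))
        feat = feat * self.gate(feat)          # reweight channels
        return self.act(x + feat)              # residual connection

# toy usage on a micro-Doppler spectrogram-like tensor
block = MultiScaleResidualChannelAttention(channels=32)
out = block(torch.randn(2, 32, 64, 64))
print(out.shape)  # torch.Size([2, 32, 64, 64])
```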
GaitAGE: Gait Age and Gender Estimation Based on an Age- and Gender-Specific 3D Human Model
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-07-08. DOI: 10.1109/TBIOM.2024.3424841
Xiang Li;Yasushi Makihara;Chi Xu;Yasushi Yagi
{"title":"GaitAGE: Gait Age and Gender Estimation Based on an Age- and Gender-Specific 3D Human Model","authors":"Xiang Li;Yasushi Makihara;Chi Xu;Yasushi Yagi","doi":"10.1109/TBIOM.2024.3424841","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3424841","url":null,"abstract":"Gait-based human age and gender estimation has potential applications in visual surveillance, such as searching for specific pedestrian groups and automatically counting customers by different ages/genders. Unlike most existing methods that exploit widely used appearance-based gait features (e.g., gait energy image and silhouettes) or simple model-based gait features (e.g., leg length, stride width/frequency, and head-to-body ratio), we explore a recently popular 3D human mesh model (i.e., skinned multi-person linear model (SMPL)), which is more robust to various covariates (e.g., view angles). Furthermore, instead of the commonly used gender-neutral SMPL model, we propose a simple yet effective method to generate more realistic age- and gender-specific human mesh models by interpolating among male, female, and infant SMPL models using two learned age and gender weights. The age weight controls the proportion of importance between male/female and infant models, which is learned in a data-driven scheme by considering the paired relation between ground-truth ages and age weights. The gender weight controls the proportion of importance between male and female models, which indicates the gender probability. Then, we explore the use of generated realistic mesh models for age and gender estimation. Finally, the human mesh reconstruction and age and gender estimation modules are integrated into a unified end-to-end framework for training and testing. The experimental results on the OU-MVLP and FVG datasets demonstrated that the proposed method achieved both good mesh reconstruction and state-of-the-art age and gender estimation results.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 1","pages":"47-60"},"PeriodicalIF":0.0,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10589467","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
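As described, the gender weight interpolates between the male and female models, and the age weight interpolates between that adult blend and the infant model. Read literally, this is a two-level linear blend; a small NumPy sketch under that reading (the real method operates on learned SMPL parameters, and the stand-in arrays here are random):

```python
import numpy as np

def blend_smpl_templates(male, female, infant, gender_w, age_w):
    """Two-level linear interpolation of SMPL template parameters.
    gender_w in [0, 1]: proportion of the male vs. female model.
    age_w    in [0, 1]: proportion of the adult blend vs. the infant model."""
    adult = gender_w * male + (1.0 - gender_w) * female
    return age_w * adult + (1.0 - age_w) * infant

# toy usage with random stand-ins for template vertex arrays (6890 x 3,
# the SMPL vertex count)
rng = np.random.default_rng(1)
male, female, infant = (rng.normal(size=(6890, 3)) for _ in range(3))
mesh = blend_smpl_templates(male, female, infant, gender_w=0.2, age_w=0.9)
print(mesh.shape)  # (6890, 3)
```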
Toward Deep Face Spoofing: Taxonomy, Recent Advances, and Open Challenges
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-06-20. DOI: 10.1109/TBIOM.2024.3417372
Dhimas Arief Dharmawan;Anto Satriyo Nugroho
{"title":"Toward Deep Face Spoofing: Taxonomy, Recent Advances, and Open Challenges","authors":"Dhimas Arief Dharmawan;Anto Satriyo Nugroho","doi":"10.1109/TBIOM.2024.3417372","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3417372","url":null,"abstract":"Deep neural networks are increasingly employed to create adversarial face images, aiming to deceive face recognition systems. While the majority of studies concentrate on digital attacks, their relevance extends to face spoofing. Notably, they have the capability to generate potential face images of victims when attackers lack knowledge about individuals registered in face recognition systems. Regrettably, recent advances in attacking face recognition systems using deep neural networks, their performance, and their transferability to physical attacks (deep face spoofing) lack systematic exploration. This paper addresses this gap by presenting the first comprehensive survey of current research in this domain. The review initiates with the definition of the deep face spoofing concept and introduces a pioneering taxonomy to systematically consolidate recent advances towards deep face spoofing. The main section of the paper provides in-depth evaluations of the mechanism, performance, and applicability of diverse deep neural network-based attacking algorithms against face recognition systems. Subsequently, the paper outlines current challenges in deep face spoofing, including the absence of evaluations of recent attacks against state-of-the-art face anti-spoofing algorithms and the limited transferability of recent digital attacks to physical attacks. This part also covers open challenges in deep face spoofing detection since it is crucial to note that studying various deep face spoofing algorithms should always be seen as an effort to investigate the vulnerability of face recognition systems against such evolved attacks, and not as an endeavor to gain access for illegal purposes. To enhance accessibility to a broad range of research papers in this area, an accompanying web page (\u0000<uri>https://github.com/dhimasarief/DFS_DFAS</uri>\u0000) has been established. This serves as a dynamic repository of studies focusing on deep face spoofing, continuously curated with new findings and contributions.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 1","pages":"16-32"},"PeriodicalIF":0.0,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Zero-Shot Demographically Unbiased Image Generation From an Existing Biased StyleGAN
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-06-18. DOI: 10.1109/TBIOM.2024.3416403
Anubhav Jain;Rishit Dholakia;Nasir Memon;Julian Togelius
{"title":"Zero-Shot Demographically Unbiased Image Generation From an Existing Biased StyleGAN","authors":"Anubhav Jain;Rishit Dholakia;Nasir Memon;Julian Togelius","doi":"10.1109/TBIOM.2024.3416403","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3416403","url":null,"abstract":"Face recognition systems have made significant strides thanks to data-heavy deep learning models, but these models rely on large privacy-sensitive datasets. Recent work in facial analysis and recognition have thus started making use of synthetic datasets generated from GANs and diffusion based generative models. These models, however, lack fairness in terms of demographic representation and can introduce the same biases in the trained downstream tasks. This can have serious societal and security implications. To address this issue, we propose a methodology that generates unbiased data from a biased generative model using an evolutionary algorithm. We show results for StyleGAN2 model trained on the Flicker Faces High Quality dataset to generate data for singular and combinations of demographic attributes such as Black and Woman. We generate a large racially balanced dataset of 13.5 million images, and show that it boosts the performance of facial recognition and analysis systems whilst reducing their biases. We have made our code-base (\u0000<uri>https://github.com/anubhav1997/youneednodataset</uri>\u0000) public to allow researchers to reproduce our work.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 4","pages":"498-514"},"PeriodicalIF":0.0,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
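The abstract specifies an evolutionary algorithm over a biased generator but not its exact form. Below is a toy (mu+lambda)-style sketch of the general idea, where `generate` and `attribute_score` are hypothetical stand-ins (in the paper's setting they would be a pretrained StyleGAN2 forward pass and a demographic attribute classifier, respectively):

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = rng.normal(size=512)  # stand-in for "the target demographic"

# Hypothetical stand-ins, not the authors' API:
def generate(latent):
    return latent                             # toy: "image" == latent

def attribute_score(image):
    return -np.linalg.norm(image - TARGET)    # toy: closeness to target

def evolve_latents(pop_size=32, n_keep=8, sigma=0.3, generations=50, dim=512):
    """Keep the latents whose outputs score highest for the target
    demographic, then refill the population with Gaussian mutations
    of the survivors."""
    pop = rng.normal(size=(pop_size, dim))    # initial z-space population
    for _ in range(generations):
        fitness = np.array([attribute_score(generate(z)) for z in pop])
        elite = pop[np.argsort(fitness)[-n_keep:]]          # best latents
        children = np.repeat(elite, pop_size // n_keep, axis=0)
        pop = children + sigma * rng.normal(size=children.shape)
    fitness = np.array([attribute_score(generate(z)) for z in pop])
    return pop[np.argsort(fitness)[::-1]]     # best first

best = evolve_latents()[0]
print(attribute_score(generate(best)))        # improves over generations
```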
Sclera-TransFuse: Fusing Vision Transformer and CNN for Accurate Sclera Segmentation and Recognition
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-06-17. DOI: 10.1109/TBIOM.2024.3415484
Caiyong Wang;Haiqing Li;Yixin Zhang;Guangzhe Zhao;Yunlong Wang;Zhenan Sun
{"title":"Sclera-TransFuse: Fusing Vision Transformer and CNN for Accurate Sclera Segmentation and Recognition","authors":"Caiyong Wang;Haiqing Li;Yixin Zhang;Guangzhe Zhao;Yunlong Wang;Zhenan Sun","doi":"10.1109/TBIOM.2024.3415484","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3415484","url":null,"abstract":"This paper investigates a deep learning based unified framework for accurate sclera segmentation and recognition, named Sclera-TransFuse. Unlike previous CNN-based methods, our framework incorporates Vision Transformer and CNN to extract complementary feature representations, which are beneficial to both subtasks. Specifically, for sclera segmentation, a novel two-stream hybrid model, referred to as Sclera-TransFuse-Seg, is developed to integrate classical ResNet-34 and recently emerging Swin Transformer encoders in parallel. The dual-encoders firstly extract coarse- and fine-grained feature representations at hierarchical stages, separately. Then a Cross-Domain Fusion (CDF) module based on information interaction and self-attention mechanism is introduced to efficiently fuse the multi-scale features extracted from dual-encoders. Finally, the fused features are progressively upsampled and aggregated to predict the sclera masks in the decoder meanwhile deep supervision strategies are employed to learn intermediate feature representations better and faster. With the results of sclera segmentation, the sclera ROI image is generated for sclera feature extraction. Additionally, a new sclera recognition model, termed as Sclera-TransFuse-Rec, is proposed by combining lightweight EfficientNet B0 and multi-scale Vision Transformer in sequential to encode local and global sclera vasculature feature representations. Extensive experiments on several publicly available databases suggest that our framework consistently achieves state-of-the-art performance on various sclera segmentation and recognition benchmarks, including the 8th Sclera Segmentation and Recognition Benchmarking Competition (SSRBC 2023). A UBIRIS.v2 subset of 683 eye images with manually labeled sclera masks, and our codes are publicly available to the community through \u0000<uri>https://github.com/lhqqq/Sclera-TransFuse</uri>\u0000.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 4","pages":"575-590"},"PeriodicalIF":0.0,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
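The Cross-Domain Fusion module is described only as fusing the dual-encoder features through information interaction and self-attention; its internals are not given in this listing. The PyTorch sketch below shows one plausible reading, bidirectional cross-attention between same-resolution CNN and Transformer feature maps followed by a 1x1 projection; the module and argument names are ours, not the paper's:

```python
import torch
import torch.nn as nn

class CrossDomainFusion(nn.Module):
    """Sketch: each stream attends to the other, then the two attended
    results are concatenated and projected back to the input width."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.cnn_to_vit = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.vit_to_cnn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.proj = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, cnn_feat, vit_feat):
        b, c, h, w = cnn_feat.shape
        cnn_tok = cnn_feat.flatten(2).transpose(1, 2)       # (B, HW, C)
        vit_tok = vit_feat.flatten(2).transpose(1, 2)
        att_c, _ = self.cnn_to_vit(cnn_tok, vit_tok, vit_tok)  # CNN queries ViT
        att_v, _ = self.vit_to_cnn(vit_tok, cnn_tok, cnn_tok)  # ViT queries CNN
        fused = torch.cat([att_c, att_v], dim=2)            # (B, HW, 2C)
        fused = fused.transpose(1, 2).reshape(b, 2 * c, h, w)
        return self.proj(fused)

# toy usage: stage features from ResNet-34 and Swin at matching resolution
cdf = CrossDomainFusion(channels=128)
out = cdf(torch.randn(2, 128, 28, 28), torch.randn(2, 128, 28, 28))
print(out.shape)  # torch.Size([2, 128, 28, 28])
```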
WASD: A Wilder Active Speaker Detection Dataset
IEEE Transactions on Biometrics, Behavior, and Identity Science. Pub Date: 2024-06-11. DOI: 10.1109/TBIOM.2024.3412821
Tiago Roxo;Joana Cabral Costa;Pedro R. M. Inácio;Hugo Proença
{"title":"WASD: A Wilder Active Speaker Detection Dataset","authors":"Tiago Roxo;Joana Cabral Costa;Pedro R. M. Inácio;Hugo Proença","doi":"10.1109/TBIOM.2024.3412821","DOIUrl":"https://doi.org/10.1109/TBIOM.2024.3412821","url":null,"abstract":"Current Active Speaker Detection (ASD) models achieve good results on cooperative settings with reliable face access using only sound and facial features, which is not suited for less constrained conditions. To demonstrate this limitation of current datasets, we propose a Wilder Active Speaker Detection (WASD) dataset, with increased difficulty by targeting the key components of current ASD: audio and face. Grouped into 5 categories, WASD contains incremental challenges for ASD with tactical impairment of audio and face data, and provides a new source for ASD via subject body annotations. To highlight the new challenges of WASD, we divide it into Easy (cooperative settings) and Hard (audio and/or face are specifically degraded) groups, and assess state-of-the-art models performance in WASD and in the most challenging available ASD dataset: AVA-ActiveSpeaker. The results show that: 1) AVA-ActiveSpeaker prepares models for cooperative settings but not wilder ones (surveillance); and 2) current ASD approaches can not reliably perform in wilder settings, even if trained with challenging data. To prove the importance of body for wild ASD, we propose a baseline that complements body with face and audio information that surpass state-of-the-art models in WASD and Columbia. All contributions are available at \u0000<uri>https://github.com/Tiago-Roxo/WASD</uri>\u0000.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 1","pages":"61-70"},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
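The proposed baseline is said to complement body information with face and audio. As a hedged illustration of such a late-fusion design (the paper's actual encoders and fusion are not reproduced; the dimensions and names here are invented), a minimal PyTorch classifier over concatenated per-modality embeddings:

```python
import torch
import torch.nn as nn

class BodyFaceAudioFusion(nn.Module):
    """Sketch of a late-fusion ASD head: per-modality embeddings for
    body, face, and audio are concatenated and classified as
    speaking vs. not speaking. Upstream encoders are assumed."""
    def __init__(self, body_dim=256, face_dim=256, audio_dim=128, hidden=256):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(body_dim + face_dim + audio_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # speaking / silent
        )

    def forward(self, body, face, audio):
        return self.classifier(torch.cat([body, face, audio], dim=-1))

model = BodyFaceAudioFusion()
logits = model(torch.randn(4, 256), torch.randn(4, 256), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 2])
```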