IEEE Transactions on Biometrics, Behavior, and Identity Science: Latest Articles

IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors
IEEE Transactions on Biometrics, Behavior, and Identity Science | Pub Date: 2024-11-25 | DOI: 10.1109/TBIOM.2024.3459104
Volume 6, Issue 4, pp. C3-C3 | Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10767129
Citations: 0
DeePhyNet: Toward Detecting Phylogeny in Deepfakes
IEEE Transactions on Biometrics, Behavior, and Identity Science | Pub Date: 2024-10-28 | DOI: 10.1109/TBIOM.2024.3487482
Authors: Kartik Thakral; Harsh Agarwal; Kartik Narayan; Surbhi Mittal; Mayank Vatsa; Richa Singh
Abstract: Deepfakes have rapidly evolved from a niche technology into a formidable tool for creating hyper-realistic manipulated content. With the ability to convincingly manipulate videos, images, and audio, deepfake technology can be used to create fake news, impersonate individuals, or even fabricate events, posing significant threats to public trust and societal stability. Extending these complexities, this paper introduces the concept of deepfake phylogeny: multiple deepfake generation algorithms can be applied sequentially to create deepfakes in a phylogenetic manner. In such a scenario, deepfake detection, ingredient-model signature detection, and phylogeny-sequence detection must all be optimized. To address this challenge, we propose DeePhyNet, which performs three tasks: it first differentiates between real and fake content; it next identifies the signature of the generative algorithm used for creation; and finally, it predicts the phylogeny of the algorithms used for generation. To the best of our knowledge, this is the first algorithm that performs all three tasks together for deepfake media analysis. Another contribution of this research is the DeePhyV2 database, which incorporates multiple deepfake generation algorithms, including recently proposed diffusion models, and longer phylogenetic sequences; it consists of 8960 deepfake videos generated using four different generation techniques. The results on multiple protocols and comparisons with state-of-the-art algorithms demonstrate that the proposed algorithm yields the highest overall classification results across all three tasks.
Volume 7, Issue 1, pp. 132-145
Citations: 0
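The phylogeny-sequence prediction task described in the abstract can be framed as classification over ordered generator sequences. A minimal sketch of that label space, with hypothetical generator names (DeePhyV2's actual four techniques are not listed here):

```python
from itertools import permutations

# Hypothetical generator names, used only to illustrate the label space.
GENERATORS = ["faceswap", "facereenact", "gan_edit", "diffusion_edit"]

def phylogeny_label_space(generators, depth):
    """All ordered sequences of distinct generators up to the given depth."""
    labels = []
    for k in range(1, depth + 1):
        labels.extend(permutations(generators, k))
    return labels

labels = phylogeny_label_space(GENERATORS, depth=2)
# 4 single-generator sequences + 4*3 ordered pairs = 16 classes
print(len(labels))
```

With four generators and depth 2, a phylogeny head would discriminate among 16 ordered sequences; longer sequences grow the label space factorially, which is part of what makes the task difficult.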
DensePoseGait: Dense Human Pose Part-Guided for Gait Recognition
IEEE Transactions on Biometrics, Behavior, and Identity Science | Pub Date: 2024-10-28 | DOI: 10.1109/TBIOM.2024.3486732
Authors: Rijun Liao; Zhu Li; Shuvra S. Bhattacharyya; George York
Abstract: Gait recognition identifies a person from their unique gait pattern. It has two popular categories: appearance-based and model-based algorithms. Appearance-based algorithms generally use human silhouettes as the initial input, but external factors such as clothing and carried objects can drastically alter silhouettes. In contrast, model-based algorithms tend to be more robust to appearance changes, generally taking human skeletons as the initial input; however, skeletons carry limited information, which hinders further performance gains. In this paper, we therefore address this challenge by presenting two new databases, CASIA-B-DensePose and MoBo-DensePose, based on the publicly available multiview databases CASIA-B and MoBo. They exploit UV coordinates of the body surface and human semantic segmentation as the initial gait feature, which is less sensitive to body shape than silhouettes and carries richer semantic information than skeletons. In addition, we introduce a novel model-based framework, DensePoseGait, to take full advantage of these databases. Unlike traditional algorithms that either extract isolated local features or combine them with global features, DensePoseGait exploits partial features in a novel way: human pose parts act as a regulator to guide the learning of global features during training. Its core idea is to build more representative features with the assistance of partial features while requiring no additional computation at inference. We believe these databases and this framework can offer researchers a fresh perspective on model-based gait recognition and inspire further exploration and advancements in this area.
Volume 7, Issue 1, pp. 33-46
Citations: 0
A Comprehensive Survey on Deep Gait Recognition: Algorithms, Datasets, and Challenges
IEEE Transactions on Biometrics, Behavior, and Identity Science | Pub Date: 2024-10-25 | DOI: 10.1109/TBIOM.2024.3486345
Authors: Chuanfu Shen; Shiqi Yu; Jilong Wang; George Q. Huang; Liang Wang
Abstract: Gait recognition aims to identify a person at a distance, serving as a promising solution for long-distance, low-cooperation pedestrian recognition. Recently, deep learning techniques have driven inspiring advances in gait recognition across many challenging scenarios. Against the backdrop of deep gait recognition achieving almost perfect performance on laboratory datasets, much recent research has introduced new challenges, including robust deep representation modeling, in-the-wild gait recognition, and even recognition from new visual sensors such as infrared and depth cameras. Meanwhile, the increasing performance of gait recognition also raises concerns about biometric security and privacy protection for society. We provide a comprehensive survey of recent deep learning literature and a discussion of the privacy and security of gait biometrics. This survey reviews existing deep gait recognition methods through a novel taxonomy. Rather than the conventional split into model-based and appearance-based methods, our taxonomic hierarchy considers deep gait recognition from two perspectives, deep representation learning and deep network architectures, illustrating current approaches at both the micro and macro levels. We also include up-to-date reviews of datasets and performance evaluations in diverse scenarios. Finally, we introduce privacy and security concerns around gait biometrics and discuss outstanding challenges and potential directions for future research.
Volume 7, Issue 2, pp. 270-292
Citations: 0
Structure Representation With Adaptive and Compact Facial Graph for Micro-Expression Recognition
IEEE Transactions on Biometrics, Behavior, and Identity Science | Pub Date: 2024-10-14 | DOI: 10.1109/TBIOM.2024.3479333
Authors: Chunlei Li; Renwei Ba; Xueping Wang; Miao Yu; Xiao Li; Di Huang
Abstract: The subtle and slight motions of micro-expressions (MEs) leave few effective features for micro-expression recognition (MER), making MER a challenging task. Existing works mainly focus on constructing strong representations from entire videos, individual frames, or redundant structural graphs; however, spatial structure feature learning for MEs leaves much room for improvement. To address this issue, this paper introduces FFDIN (Focusing on Few Discriminative Information Network), a novel two-stream network for MER that requires no prior knowledge. Specifically, in the temporal stream, the difference between the Apex and Onset frames is used as input to reduce redundant information and aggregate temporal information, while spatial attention is incorporated into the CNN stream to encourage the network to focus on salient features. In the structural stream, the Adaptively Select Strategy (ADSS) is proposed to automatically locate the few effective regions of MEs by selecting strongly long-term-dependent cropped patches and the corresponding adjacency matrix. A Graph Nodes Generation (GNG) module is then designed to capture local and global information in tiny cropped patches and project the feature maps into graph nodes. Extensive experiments on the CASME II, SAMM, and SMIC datasets demonstrate that the proposed network achieves superior performance compared with state-of-the-art methods.
Volume 7, Issue 2, pp. 256-269
Citations: 0
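The temporal stream's input, as the abstract describes, is the difference between the Apex (peak-expression) and Onset frames; a minimal numpy sketch of that step (frame shapes and values are illustrative):

```python
import numpy as np

# Pixel-wise Apex - Onset difference: static face content cancels out,
# leaving only the subtle motion cue that FFDIN's temporal stream consumes.
def apex_onset_difference(apex: np.ndarray, onset: np.ndarray) -> np.ndarray:
    assert apex.shape == onset.shape, "frames must be aligned and same size"
    return apex.astype(np.float32) - onset.astype(np.float32)

onset = np.zeros((4, 4), dtype=np.uint8)
apex = onset.copy()
apex[1, 2] = 30                  # a tiny localized intensity change
diff = apex_onset_difference(apex, onset)
print(diff.sum())                # only the moving region survives
```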
A Novel Deep Metric Learning-Based State-Stable and Noise-Aware Biometric Authentication Framework Using Seismocardiogram Signals
IEEE Transactions on Biometrics, Behavior, and Identity Science | Pub Date: 2024-10-11 | DOI: 10.1109/TBIOM.2024.3478373
Authors: Arka Roy; Udit Satija
Abstract: Biometric authentication based on physiological signals has attracted significant attention in the last decade, driven by advances in wearable sensors and communication technologies, alongside traditional recognition modalities such as fingerprints and faces. Researchers have recently been drawn to photoplethysmogram (PPG)-based biometric authentication owing to its non-invasiveness, low cost, and adhesive-free operation, unlike widely used electrocardiogram (ECG)-based authentication. However, its identification accuracy (IA) severely deteriorates under frequent motion artifacts, and it poses security issues due to its few fiducial points and the exposure of live video of the subject in video-based PPG. A few researchers have recently explored the seismocardiogram (SCG), another mechanical cardiac signal modality, for biometric authentication, but these methods cannot extract state-stable embeddings, which impacts the IA. To overcome these issues, we propose a deep metric learning-based biometric authentication framework using SCG signals. The proposed framework consists of the following stages: pre-processing, mel-spectrogram extraction, subject-specific state-stable feature extraction using a parameter-shared triplet neural network, embedding-dictionary construction, and authentication using an intelligent cosine similarity-based authentication module. The framework is evaluated on the only publicly available dataset, CEBS, under basal, music, and post-music states, and outperforms existing works by achieving an IA of 99.79% and an equal error rate (EER) of 0.42%.
Volume 7, Issue 2, pp. 246-255
Citations: 0
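The parameter-shared triplet network in this pipeline is trained with a triplet objective; a minimal sketch of that loss (the embeddings and margin below are illustrative, not the paper's settings):

```python
import numpy as np

# Triplet loss: pull an anchor embedding toward a positive (same subject,
# possibly recorded in a different physiological state) and push it away
# from a negative (a different subject), up to a margin.
def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])    # same subject, nearby state
n = np.array([-1.0, 0.2])   # different subject, far away
print(triplet_loss(a, p, n))  # margin already satisfied -> loss is 0.0
```

Embeddings that keep the loss at zero across basal, music, and post-music recordings are what the abstract calls "state-stable".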
OASG-Net: Occlusion Aware and Structure-Guided Network for Face De-Occlusion
IEEE Transactions on Biometrics, Behavior, and Identity Science | Pub Date: 2024-10-09 | DOI: 10.1109/TBIOM.2024.3476947
Authors: Yuewei Fu; Buyun Liang; Zhongyuan Wang; Baojin Huang; Tao Lu; Chao Liang; Jing Liao
Abstract: During the COVID-19 pandemic, almost everyone wore a facial mask, which poses a huge challenge for face recognition. It is therefore urgent to improve face de-occlusion for masked faces. However, previous inpainting approaches are limited in that they require a given occlusion mask, and real-world masked face datasets suitable for the de-occlusion task have been lacking. To tackle these issues, we pioneer a real-world masked face de-occlusion dataset (RMFDD) with accurate mask labels. Further, we propose an occlusion-aware and structure-guided network (OASG-Net) for face de-occlusion, consisting of a mask prediction subnet, a structure prediction subnet, and a face de-occlusion subnet. In particular, thanks to the mask prediction subnet, OASG-Net achieves face de-occlusion without an externally given mask. We also use face structure to guide OASG-Net, which makes the recovered facial topology more natural and realistic. In addition, we design a mask-aware layer to avoid the hard 0-1 mask update of partial convolution in the face de-occlusion subnet. Extensive results on both face de-occlusion and face recognition tasks demonstrate the superiority of OASG-Net over state-of-the-art competitors. Code is available at https://github.com/WHUfreeway/OASG-Net-Occlusion-Aware-and-Structure-Guided-Network-for-Face-De-Occlusion.
Volume 7, Issue 2, pp. 234-245
Citations: 0
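The abstract notes that the mask-aware layer avoids the hard 0-1 mask update of partial convolution. A toy contrast between that hard update and a soft, fraction-of-valid-pixels alternative (the paper's exact layer may differ from this sketch):

```python
import numpy as np

# In partial convolution, the validity mask is updated with a hard rule: a
# window becomes fully valid (1) as soon as it covers any valid pixel. A
# common soft alternative keeps the fraction of valid pixels per window
# instead, so confidence in the inpainted region grows gradually.
def update_mask(mask, ksize=3, hard=True):
    h, w = mask.shape
    pad = ksize // 2
    padded = np.pad(mask, pad)
    out = np.zeros_like(mask, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            frac = padded[i:i + ksize, j:j + ksize].mean()
            out[i, j] = float(frac > 0) if hard else frac
    return out

mask = np.zeros((3, 3))
mask[0, 0] = 1.0                            # a single valid pixel
print(update_mask(mask, hard=True)[1, 1])   # hard: window counts as fully valid
print(update_mask(mask, hard=False)[1, 1])  # soft: only 1/9 of the window is valid
```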
Out-of-Distribution Representation and Graph Neural Network Fusion Learning for ECG Biometrics
IEEE Transactions on Biometrics, Behavior, and Identity Science | Pub Date: 2024-09-30 | DOI: 10.1109/TBIOM.2024.3470232
Authors: Tianbang Ma; Yuwen Huang; Ran Yi; Gongping Yang; Yilong Yin
Abstract: The electrocardiogram (ECG) signal, a promising biometric trait, has been extensively studied. While deep learning-based models have demonstrated strong performance for ECG biometrics, several challenges remain, including efficiently extracting the topological properties of 1D signals, efficiently using signal distribution information across time, and the critical dependence on specific deep neural network architectures. To address these issues, this study proposes an out-of-distribution representation and graph neural network fusion learning (ORGNNFL) method for ECG biometrics. ORGNNFL is mainly composed of a two-branch deep learning model capable of learning discriminative features from latent distributions and extracting topological features of 1D ECG signals. A multi-feature attention module is also proposed to adaptively capture valuable information from the two branches. Experiments on four databases demonstrate that our method outperforms state-of-the-art techniques, paving the way for a new era in ECG biometrics. Code is available at https://github.com/matianbang/ORGNNFL.
Volume 7, Issue 2, pp. 225-233
Citations: 0
Step Count Print: A Physical Activity-Based Biometric Identifier for User Identification and Authentication
IEEE Transactions on Biometrics, Behavior, and Identity Science | Pub Date: 2024-09-24 | DOI: 10.1109/TBIOM.2024.3466269
Authors: Zhen Chen; Keqin Shi; Weiqiang Sun
Abstract: Step count is one of the most widely used physical activity measurements and is easily accessible through smartphones and wearable devices. It records the intensity and timing of a user's physical activities and often reflects a user's unique way of living. Incorporating step counts into biometric systems may thus offer an opportunity to develop innovative, user-friendly, and non-invasive strategies for user identification and authentication. In this paper, we propose Step Count Print (SCP), a novel physical activity-based biometric identifier. Extracted from coarse-grained, minute-level step counts, SCP contains features, such as step cadence distribution and average step distribution, that reflect an individual's physical activity behavior. With data collected from 100 users over a five-year period, we conducted an ablation study to demonstrate the non-redundancy of SCP in user identification and authentication scenarios using common machine learning algorithms. The results show that SCP can achieve a Rank-1 rate of up to 75.0% in user identification and an average accuracy of 92.3% in user authentication. Per-user accuracy histograms under different classification algorithms further demonstrate the universality of SCP and its effectiveness across a range of scenarios and use cases.
Volume 7, Issue 2, pp. 210-224
Citations: 0
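One of the SCP features named in the abstract, the step cadence distribution, can be sketched from minute-level step counts; the bin edges and the cosine-similarity matcher below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

# Turn a day of minute-level step counts into a normalized cadence
# distribution: histogram the active (non-zero) minutes into cadence bands.
def cadence_distribution(steps_per_minute, bins=(0, 1, 60, 100, 130, 300)):
    steps = np.asarray(steps_per_minute)
    active = steps[steps > 0]                # ignore idle minutes
    hist, _ = np.histogram(active, bins=bins)
    return hist / max(hist.sum(), 1)         # normalized distribution

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

day1 = [0, 0, 85, 90, 110, 0, 95]            # toy minute-level step counts
day2 = [0, 88, 92, 0, 105, 98, 0]
f1, f2 = cadence_distribution(day1), cadence_distribution(day2)
print(round(cosine_similarity(f1, f2), 3))   # same user, similar cadence profile
```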
Learning Multi-Scale Knowledge-Guided Features for Text-Guided Face Recognition
IEEE Transactions on Biometrics, Behavior, and Identity Science | Pub Date: 2024-09-23 | DOI: 10.1109/TBIOM.2024.3466216
Authors: Md Mahedi Hasan; Shoaib Meraj Sami; Nasser M. Nasrabadi; Jeremy Dawson
Abstract: Text-guided face recognition (TGFR) aims to improve the performance of state-of-the-art face recognition (FR) algorithms by incorporating auxiliary information, such as distinct facial marks and attributes, provided as natural language descriptions. Current TGFR algorithms have proven highly effective at addressing performance drops in state-of-the-art FR models, particularly in scenarios involving sensor noise, low resolution, and turbulence effects. Although existing methods explore various cross-modal alignment and fusion techniques, they face practical limitations in real-world applications: during inference, textual descriptions associated with face images may be missing, lack crucial details, or be incorrect, and the inherent modality heterogeneity poses a significant challenge for effective cross-modal alignment. To address these challenges, we introduce CaptionFace, a TGFR framework that integrates GPTFace, a face image captioning model designed to generate context-rich natural language descriptions from low-resolution facial images. By leveraging GPTFace, we overcome the issue of missing textual descriptions, expanding the applicability of CaptionFace to single-modal FR datasets. Additionally, we introduce a multi-scale feature alignment (MSFA) module to ensure semantic alignment between face-caption pairs at different granularities. Furthermore, we introduce an attribute-aware loss and perform knowledge adaptation to tailor textual knowledge to facial features. Extensive experiments on three face-caption datasets and various unconstrained single-modal benchmark datasets demonstrate that CaptionFace significantly outperforms state-of-the-art FR models and existing TGFR approaches.
Volume 7, Issue 2, pp. 195-209
Citations: 0
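The MSFA module aligns face-caption pairs at different granularities; a toy sketch combining a global cosine score with a fine-grained word-to-patch matching score (the shapes, fusion weight, and matching rule are assumptions for illustration, not CaptionFace's actual design):

```python
import numpy as np

def l2norm(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Coarse granularity: cosine of mean-pooled embeddings. Fine granularity:
# match each word embedding to its best face-patch embedding, then average.
def alignment_score(patch_emb, word_emb, alpha=0.5):
    g = float(l2norm(patch_emb.mean(0)) @ l2norm(word_emb.mean(0)))
    sim = l2norm(word_emb) @ l2norm(patch_emb).T   # (n_words, n_patches)
    local = float(sim.max(axis=1).mean())          # best patch per word
    return alpha * g + (1 - alpha) * local

patches = np.array([[1.0, 0.0], [0.0, 1.0]])  # two toy face-patch embeddings
words = np.array([[1.0, 0.0]])                # one word, matching patch 0
print(round(alignment_score(patches, words), 3))
```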