{"title":"MTSNet: Convolution-based Transformer Network with Multi-scale Temporal-Spectral Feature Fusion for SSVEP Signal Decoding.","authors":"Zhen Lan, Zixing Li, Chao Yan, Xiaojia Xiang, Dengqing Tang, Min Wu, Zhenghua Chen","doi":"10.1109/JBHI.2025.3573410","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3573410","url":null,"abstract":"<p><p>Improving the decoding performance of steady-state visual evoked (SSVEP) signals is crucial for the practical application of SSVEP-based brain-computer interface (BCI) systems. Although numerous methods have achieved impressive results in decoding SSVEP signals, most of them focus only on the temporal or spectral domain information or concatenate them directly, which may ignore the complementary relationship between different features. To address this issue, we propose a dual-branch convolution-based Transformer network with multi-scale temporal-spectral feature fusion, termed MTSNet, to improve the decoding performance of SSVEP signals. Specifically, the temporal branch extracts temporal features from the SSVEP signals using the multi-level convolution-based Transformer (Convformer) that can adapt to the dynamic fluctuations of SSVEP signals. In parallel, the spectral branch takes the complex spectrum converted from temporal signals by the zero-padding fast Fourier transform as input and uses the Convformer to extract spectral features. These extracted temporal and spectral features are then integrated by the multi-scale feature fusion module to obtain comprehensive features with different scale information, thereby enhancing the interactions between the features and improving the effectiveness and robustness. Extensive experimental results on two widely used public SSVEP datasets, Benchmark and BETA, show that the proposed MTSNet significantly outperforms the state-of-the-art calibration-free methods in terms of accuracy and ITR. The superior performance demonstrates the effectiveness of our method in decoding SSVEP signals, which may facilitate the practical application of SSVEP-based BCI systems.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144132376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Shape-Aware Total Body Photography System for In-focus Surface Coverage Optimization.","authors":"Wei-Lun Huang, Joshua Liu, Davood Tashayyod, Jun Kang, Amir Gandjbakhche, Misha Kazhdan, Mehran Armand","doi":"10.1109/JBHI.2025.3573150","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3573150","url":null,"abstract":"<p><p>Total Body Photography (TBP) is becoming a useful screening tool for patients at high risk for skin cancer. While much progress has been made, existing TBP systems can be further improved for automatic detection and analysis of suspicious skin lesions, which is in part related to the resolution and sharpness of acquired images. This paper proposes a novel shape-aware TBP system automatically capturing full-body images while optimizing image quality in terms of resolution and sharpness over the body surface. The system uses depth and RGB cameras mounted on a 360-degree rotary beam, along with 3D body shape estimation and an in-focus surface optimization method to select the optimal focus distance for each camera pose. This allows for optimizing the focused coverage over the complex 3D geometry of the human body given the calibrated camera poses. We evaluate the effectiveness of the system in capturing high-fidelity body images. The proposed system achieves an average resolution of 0.068 mm/pixel and 0.0566 mm/pixel with approximately 85% and 95% of surface area in-focus, evaluated on simulation data of diverse body shapes and poses as well as a real scan of a mannequin respectively. Furthermore, the proposed shape-aware focus method outperforms existing focus protocols (e.g. auto-focus). We believe the high-fidelity imaging enabled by the proposed system will improve automated skin lesion analysis for skin cancer screening.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144132362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-grained Line Graph Neural Network with Hierarchical Contrastive Learning for Predicting Drug-disease Associations.","authors":"Bao-Min Liu, Ling-Yun Dai, Junliang Shang, Chun-Hou Zheng, Ying-Lian Gao, Rui Gao, Jin-Xing Liu","doi":"10.1109/JBHI.2025.3573158","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3573158","url":null,"abstract":"<p><p>Predicting drug-disease associations is a crucial step in drug repositioning, especially with computational methods that quickly locate potential drug-disease pairs. Heterogenous network is a common tool for introducing multiple type relation information about drugs and diseases. However, the diversity of relations is ignored in most of existing methods, which makes them difficult to explore type semantic information with structure properties. Therefore, we propose a relation-centric GNN framework to encode critical association patterns. Firstly, we utilize a relation-centric graph, line graph, to represent the context of a drug-disease pair identified as the center node. The prediction problem is modeled to learn the embedding vector of the center node. Secondly, a multi-grained line graph neural network (MGLGNN) is designed to excavate fine-grained features that encapsulate local graph structures. We theoretically define a handful of typical nodes that can be regarded as high-order abstractions of relations in each type. Then, MGLGNN distills the local information and passes it to typical nodes from a global perspective. With learned multi-grained features, the center node automatically captures heterogenous relation semantics and structure patterns. Thirdly, a hierarchical contrastive learning (HCL) mechanism is proposed to ensure the quality of multi-grained features in an unsupervised way. Extensive experiments show the great potential of our model in mining drug-disease associations.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144132377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interpretable and Adaptive GAN-BiLSTM Approach for Cyber Threat Detection in IoMT-based Healthcare 5.0.","authors":"Zabeeh Ullah, Fahim Arif, Nauman Ali Khan, Mudassar Ali Khan, Ikram Ud Din, Ahmad Almogren, Ayman Altameem","doi":"10.1109/JBHI.2025.3573097","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3573097","url":null,"abstract":"<p><p>Healthcare 5.0, driven by the Internet of Medical Things (IoMT), introduces transformative changes in the medical field but also exposes systems to growing cybersecurity threats. While Deep Learning (DL) offers high accuracy in attack detection, its effectiveness is often limited by data imbalance and difficulty in identifying key features dynamically. Additionally, DL models are often criticized for their lack of interpretability, as their internal decisionmaking remains obscure. To overcome these limitations, this paper presents an explainable and adaptive DL-based security framework. It integrates a Generative Adversarial Network (GAN) to balance the dataset by generating realistic samples for underrepresented attack classes, and employs Bidirectional Long Short-Term Memory (BiLSTM) to identify temporal patterns and critical features. To enhance transparency, SHapley Additive exPlanations (SHAP) and Permutation Feature Importance (PFI) are used for interpreting the model's decisions. Experiments conducted on the NSL-KDD dataset demonstrate the effectiveness of the proposed method, achieving 93.81% accuracy and an F1-score of 82.95%.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144132365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-Time Implementation of Accelerated HCP-MMA for Deep Learning-Based ECG Arrhythmia Classification Using Contour-Based Visualization.","authors":"Basab Bijoy Purkayastha, Shovan Barma","doi":"10.1109/JBHI.2025.3572376","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3572376","url":null,"abstract":"<p><p>This study presents a real-time implementation of an accelerated Hurst Contour Projection from Multiscale Multifractal Analysis (HCP-MMA) for deep learning-based ECG arrhythmia classification. Traditional heart rate variability analyses rely on fixed time scales and predefined parameters, limiting their ability to capture intricate scaling patterns and leading to diagnostic inconsistencies. HCP-MMA converts complex multifractal properties into a contour-based representation, enhancing interpretability for automated classification. However, the high computational cost of MMA hinders real-time processing. To address this, a runtime-optimized parallel computing pipeline is introduced, incorporating singular value decomposition (SVD) and vectorized processing, achieving a $730times$ speedup over the baseline implementation on an Intel-based system. The proposed HCP-MMA framework, integrated with AlexNet, achieved over 98% classification accuracy across three benchmark datasets (PhysioNet, MIT-BIH, CU), with an F1-score of up to 99.3%. Runtime optimizations enabled real-time deployment on Raspberry Pi 5, demonstrating a $sim 199times$ speedup over baseline MMA computation on embedded hardware, with an average inference time of 0.0668 seconds per image, a memory footprint of approximately 220 MB, and a model size of $sim 122$ MB. Statistical validation using ANOVA and Tukey's HSD tests (p $< 0.05$) confirmed the approach's robustness and generalizability. By bridging computational efficiency with real-time adaptability, this method not only advances automated ECG diagnostics but also paves the way for scalable deployment in wearable monitoring, telemedicine, and multifractal analysis of complex physiological time-series.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144127582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolving Dual-directional Multiobjective Feature Selection for High-dimensional Gene Expression Data.","authors":"Yunhe Wang, Zhengyu Du, Xiaomin Li, Wenyuan Xiao, Hongpu Liu, Liang Yang","doi":"10.1109/JBHI.2025.3572310","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3572310","url":null,"abstract":"<p><p>High-dimensional gene expression data has gained considerable attention in diverse medical fields such as disease diagnosis, with the challenges of the dimensionality curse and exponentially growing computation. To analyze the data, feature selection is an essential step by reducing the dimensionality. However, most feature selection algorithms for high-dimensional gene expression data still suffer from low classification and poor generalization ability. An evolutionary algorithm is an effective paradigm for enhancing global search capability in feature selection. Inspired by the evolutionary algorithm Competitive Swarm Optimization, we propose a Multiobjective Dual-directional Competitive Swarm Optimization (MODCSO) method for feature selection from high-dimensional gene expression data. First, we design a competitive swarm optimization algorithm framework based on multi-objective optimization to evolve three objective functions simultaneously. Then, we introduce a dual-directional learning strategy that trains particles within the loser group using two distinct learning strategies. To assess the effectiveness and efficiency of the suggested algorithm, we evaluate MODCSO through extensive experiments on twenty high-dimensional gene expression datasets and three real-world biological datasets. Compared to various leading feature selection algorithms, our proposed algorithm MODCSO exhibits superior competitiveness for the high-dimensional feature selection task. Moreover, we provide other extensive analyses to demonstrate further the robustness and biological interpretability of MODCSO in handling high-dimensional gene expression data.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144127546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GCNLA: Inferring Cell-Cell Interactions From Spatial Transcriptomics With Long Short-Term Memory and Graph Convolutional Networks.","authors":"Chao Yang, Xiuhao Fu, Zhenjie Luo, Leyi Wei, Jingbing Li, Feifei Cui, Quan Zou, Qingchen Zhang, Zilong Zhang","doi":"10.1109/JBHI.2025.3572383","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3572383","url":null,"abstract":"<p><p>Spatial transcriptomics analysis methods offer an opportunity to investigate highly diverse biological tissues. Cell-cell communication is fundamental for maintaining physiological homeostasis in organisms and coordinating complex biological processes. Identifying cell-cell interactions is critical for understanding cellular activities. The interaction of a cell with other cells depends on several factors, and most of the existing methods that consider only gene expression information of neighbouring cells and spatial location information are somewhat limited. In this paper, we propose a network architecture based on graph convolution network and long short-term memory attention module-GCNLA, which contains graph convolution layer, long short-term memory network, attention module, and residual connections. GCNLA not only learns the spatial structure of cells but also captures interaction information between distal cells, the attention module further extracting and enhancing features related to cell-cell interactions. Finally, the inner product decoding calculates the cosine similarity, which is used to infer cell-cell interactions. In addition, GCNLA is capable of reconstructing the complete cell-cell interaction network. The experimental results on seqFISH and MERFISH demonstrate that the GCNLA network structure has better robustness and noise immunity. The potential features learned by GCNLA enable other downstream analyses, including single-cell resolution cell clustering based on spatial information resolving cell heterogeneity.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144127552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DAM: Degradation-aware Model for Ultrasound Image Quality Assessment.","authors":"Tuo Liu, Xuan Zhang, Xiuzhu Ma, Shuang Chen, Xuejuan Wang, Ping Zhou, Yang Chen, Guangquan Zhou, Faqin Lv","doi":"10.1109/JBHI.2025.3572459","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3572459","url":null,"abstract":"<p><p>One of the core challenges in ultrasound image quality assessment (IQA) is the entanglement of semantic content and quality-related information, such as blurring and shadows. Insufficient attention to the latter can easily lead to biased IQA results. Furthermore, fine-grained quality inconsistencies, i.e., subtle variations in ultrasound images that can impact quality interpretations, may further complicate the IQA tasks. To address these challenges, we propose a novel degradation-aware model (DAM) for the ultrasound IQA, which effectively perceives various and subtle variations of quality patterns, accurately assessing the quality of ultrasound images. The advanced degradation-derived augmentation (DDA) in DAM incorporates degradations that clinicians may focus on during IQA into the synthesis of appearance changes, promoting the disentanglement of quality-related representations from semantic contents. Subsequently, we present fine-grained degradation learning (FGDL), which encourages distinctions between image versions with diminishing quality inconsistencies, boosting the awareness of quality nuances from easy to hard for better ultrasound IQA performance. A universal boundary acquisition operator (UBAO) is also developed to suppress interferences from redundant information, achieving the standardization of ultrasound images from various devices. Extensive experimental results on an in-house ultrasound dataset demonstrate that DAM outperforms 14 baseline methods, achieving a PLCC of 0.760 and an SROCC of 0.766. The code can be available at this URL.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144127543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-Scale Texture Supplementation for Reference-based Medical Image Super-Resolution.","authors":"Yinghua Li, Weiao Hao, Hao Zeng, Longguang Wang, Jian Xu, Sidheswar Routray, Rutvij H Jhaveri, Thippa Reddy Gadekallu","doi":"10.1109/JBHI.2025.3572502","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3572502","url":null,"abstract":"<p><p>Magnetic Resonance Imaging (MRI) is a widely used medical imaging technique, but its resolution is often limited by acquisition time constraints, potentially compromising diagnostic accuracy. Reference-based Image Super-Resolution (RefSR) has shown promising performance in addressing such challenges by leveraging external high-resolution (HR) reference images to enhance the quality of low-resolution (LR) images. The core objective of RefSR is to accurately establish correspondences between the reference HR image and the LR images. In pursuit of this objective, this paper develops a Self-rectified Texture Supplementation network for RefSR (STS-SR) to enhance fine details in MRI images and support the expanding role of autonomous AI in healthcare. Our network comprises a texture-specified selfrectified feature transfer module and a cross-scale texture complementary network. The feature transfer module employs highfrequency filtering to facilitate the network concentrating on fine details. To better exploit the information from both the reference and LR images, our cross-scale texture complementary module incorporates the All-ViT and Swin Transformer layers to achieve feature aggregation at multiple scales, which enables high-quality image enhancement that is critical for autonomous AI systems in healthcare to make accurate decisions. Extensive experiments are performed across various benchmark datasets. The results validate the effectiveness of our method and demonstrate that the method produces state-of-the-art performance as compared to existing approaches. This advancement enables autonomous AI systems to utilize high-quality MRI images for more accurate diagnostics and reliable predictions.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144127540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HealthiVert-GAN: A Novel Framework of Pseudo-Healthy Vertebral Image Synthesis for Interpretable Compression Fracture Grading.","authors":"Qi Zhang, Cheng Chuang, Shunan Zhang, Ziqi Zhao, Kun Wang, Jun Xu, Jianqi Sun","doi":"10.1109/JBHI.2025.3572458","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3572458","url":null,"abstract":"<p><p>Osteoporotic vertebral compression fractures (OVCFs) are prevalent in the elderly population, typically assessed on computed tomography (CT) scans by evaluating vertebral height loss. This assessment helps determine the fracture's impact on spinal stability and the need for surgical intervention. However, the absence of pre-fracture CT scans and standardized vertebral references leads to measurement errors and inter-observer variability, while irregular compression patterns further challenge the precise grading of fracture severity. While deep learning methods have shown promise in aiding OVCFs screening, they often lack interpretability and sufficient sensitivity, limiting their clinical applicability. To address these challenges, we introduce a novel vertebra synthesis-height loss quantification-OVCFs grading framework. Our proposed model, HealthiVert-GAN, utilizes a coarse-to-fine synthesis network designed to generate pseudo-healthy vertebral images that simulate the pre-fracture state of fractured vertebrae. This model integrates three auxiliary modules that leverage the morphology and height information of adjacent healthy vertebrae to ensure anatomical consistency. Additionally, we introduce the Relative Height Loss of Vertebrae (RHLV) as a quantification metric, which divides each vertebra into three sections to measure height loss between pre-fracture and post-fracture states, followed by fracture severity classification using a Support Vector Machine (SVM). Our approach achieves state-of-the-art classification performance on both the Verse2019 dataset and in-house dataset, and it provides cross-sectional distribution maps of vertebral height loss. This practical tool enhances diagnostic accuracy in clinical settings and assisting in surgical decision-making.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144127558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}