{"title":"Learning High-Order Relationships with Hypergraph Attention-based Spatio-Temporal Aggregation for Brain Disease Analysis.","authors":"Wenqi Hu, Xuerui Su, Guanliang Li, Yidi Pan, Aijing Lin","doi":"10.1109/JBHI.2026.3690795","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3690795","url":null,"abstract":"<p><p>Functional connectivity derived from functional magnetic resonance imaging (fMRI) primarily captures pairwise interactions between brain regions, which limits its ability to characterize complex high-order relationships. Hypergraph-based methods provide a natural way to model such interactions, yet most existing approaches rely on predefined hypergraph structures and neglect temporal dynamics, resulting in limited expressiveness and interpretability. To address these challenges, we propose a novel framework that jointly learns informative and sparse high-order brain structures along with their temporal dynamics. Inspired by the information bottleneck principle, we introduce an objective that maximizes information and minimizes redundancy, aiming to retain disease-relevant high-order features while suppressing irrelevant information. The proposed framework consists of three key components: (1) a multi-hyperedge binary mask module for hypergraph structure learning, (2) a hypergraph self-attention aggregation module that captures spatial features through adaptive attention across nodes and hyperedges, and (3) a spatio-temporal low-dimensional network for extracting discriminative spatio-temporal representations for disease classification. Experiments on benchmark fMRI datasets demonstrate that our method achieves competitive performance compared to the state-of-the-art approaches and effectively captures meaningful high-order brain interactions. 
These findings provide new insights into brain network modeling, showing potential for analyzing neuropsychiatric disorders.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Explainable Liquid Time-Constant Network for Multi-Modal Fatigue Detection in Healthcare 4.0.","authors":"Xu Xu, Ghulam Muhammad","doi":"10.1109/JBHI.2026.3690611","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3690611","url":null,"abstract":"<p><p>Driver fatigue detection has become increasingly critical for healthcare 4.0 systems, as it enables real-time monitoring of internal cognitive states to ensure road safety. However, existing methods often suffer from two critical limitations, i.e., insufficient modeling of time-varying dynamics, and ineffective fusion of multi-modal signals due to neglect of intra- and inter-modality dependencies. To address these challenges, we propose an explainable AI (XAI) framework, named LTC-DFD, for multi-modal driving fatigue detection. It is composed of five parallel branches to process distinct physiological modalities, each equipped with a Liquid Time-Constant block to model temporal dynamics using trainable differential equations. A dual-level attention mechanism is introduced, combining channel attention to emphasize salient intra-modal features and token-level attention to capture cross-modal dependencies. The fused representation is then passed through a fully connected regression head to estimate the driver's fatigue level. We evaluate LTC-DFD on the SEED-VIG dataset under a cross-subject protocol. It achieves an accuracy of 96.5%, RMSE of 0.22, and parameter count of only 0.42 M, demonstrating superior performance over existing state-of-the-art (SOTA) methods. 
In addition, the learned temporal dynamics and attention patterns are consistent with known neurophysiological markers of drowsiness, supporting trustworthy deployment of LTC-DFD in healthcare 4.0 driver-monitoring services.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147836984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wavelet-Transformer Attention Network for Accurate Fetal ECG Estimation from Multi-Channel Abdominal Signals.","authors":"Xu Wang, Zhaoshui He, Zhijie Lin, Yang Han, Shengli Xie","doi":"10.1109/JBHI.2026.3690589","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3690589","url":null,"abstract":"<p><p>Accurate fetal electrocardiogram extraction from abdominal recordings remains challenging due to strong maternal electrocardiogram artifacts and low signal quality. To address these issues, a Wavelet-Transformer Attention Network (WTA-Net) is proposed for fetal electrocardiogram extraction, where the Cross-Attention Transformer (CAT) module is devised to suppress maternal interference by modeling cross-modal interactions, and the Residual Shrinkage (RS) module is designed to attenuate noise artifact through adaptive thresholding. Validation findings reveal that the proposed WTA-Net outperforms state-of-the-art methods, achieving positive predictive values of 99.82% and 99.87% for fetal QRS detection on the ADFECGDB and B2_LABOUR databases, respectively, further enhancing the reliability of prenatal monitoring.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FedTFT: Federated Temporal Fusion Transformer for Interpretable Multi-Horizon Psychiatric Risk Prediction Across Cross-Silo Hospitals.","authors":"Akeel Ahamed, Ri-Ra Kang, KangYoon Lee","doi":"10.1109/JBHI.2026.3690452","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3690452","url":null,"abstract":"<p><p>Psychiatric inpatient monitoring generates multimodal data, but privacy constraints and cross-hospital heterogeneity limit centralized learning. We propose FedTFT, a federated Temporal Fusion Transformer for multi-horizon psychiatric risk prediction with horizon-decoupled prediction heads for one-hour, one-day, and one-week forecasting and an area under the receiver operating characteristic curve (AUROC)-weighted server aggregation strategy for non-independent and identically distributed hospital data. Each horizon uses its own linear output head to reduce cross-horizon gradient interference, while local training uses proximal updates. We trained and evaluated the model on 246 patients from three South Korean hospitals without sharing raw records. On the global holdout set, FedTFT achieved 93.9% accuracy, AUROC 0.9054, event-F1 0.8242, and Brier score 0.0680. Under matched federated settings, FedTFT improved event-F1 by 19.58 percentage points (pp) over the best competing federated baseline and improved AUROC by 2.74 pp over the highest-AUROC competing federated baseline, while maintaining the lowest Brier score. Ablations confirmed contributions from both the horizon-decoupled design and the AUROC-weighted aggregation strategy. Gradient SHapley Additive exPlanations (SHAP) analysis identified significant predictors such as treatment time, circadian heart-rate fluctuations, and mobility changes. 
These findings support accurate, calibrated, and interpretable privacy-preserving psychiatric risk forecasting for proactive intervention.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147836987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SpineVLM: A Markdown-Guided Structured Fine-Tuning Framework for Spine X-ray Report Generation.","authors":"Dong Liu, Wenhui Li, Ning Xu, Guoge Han, Rui Hao, Xianzhu Liu, An-An Liu","doi":"10.1109/JBHI.2026.3689568","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3689568","url":null,"abstract":"<p><p>Automated medical report generation in specialized fields like spine radiography is constrained by data scarcity and high annotation costs. Consequently, existing multimodal large language models (MLLMs) struggle in these settings, often missing minute, scattered spinal abnormalities. We introduce SpineVLM, a data-efficient framework for structured spine X-ray report generation. The framework is built upon the newly constructed SXRG dataset, comprising 10,468 image-report pairs developed via a hierarchical AI-assisted annotation pipeline. To optimize learning under limited data, we propose Markdown-Guided Structured Learning (MGSL), which reformulates unconstrained free-text synthesis into a structured completion task, acting as a strong regularizer. Furthermore, an unsupervised Region-Focused Inference (RFI) module powered by foundation models (DINOv2) isolates the vertebral column to enhance the perception of subtle lesions without requiring manual spatial annotations. Evaluated on a 7B-parameter vision-language backbone, SpineVLM achieves strong performance against ten baseline multimodal models across standard linguistic metrics. In a double-blind reader study, the system achieved a diagnostic F1-score of 0.866, comparable to specialist performance, while reducing clinical reporting time by over 41%. By open-sourcing the dataset and codebase, we provide, to our knowledge, the first quantitative benchmark for automated spine radiography report generation, together with a structured framework for this data-limited setting. 
All data and code will be publicly released at https://github.com/LiuDongDaniel/SpineVLM.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SSDiff: A Contrast-Free Virtual LGE Generator for Acute Myocardial Infarction with Joint Segmentation via Diffusion Model.","authors":"Jing Qi, Xiuzheng Yue, Miao Hu, Xin Wen, Yinyin Chen, Hang Jin, Chengyan Wang, Tao Li, Kunlun He","doi":"10.1109/JBHI.2026.3689083","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3689083","url":null,"abstract":"<p><p>Myocardial infarction (MI) remains a major cause of death and disability. Although late gadolinium enhancement (LGE) cardiac MRI is the reference for assessing myocardial viability, it requires contrast injection, complex protocols, and added cost. Prior virtual LGE approaches-mostly GAN-based-mainly use cine or T1 mapping and ignore T2-weighted short-tau inversion recovery (T2-STIR), which is highly sensitive to edema in acute MI. They also typically require manual post-hoc delineation of infarcts. We propose SSDiff (Synthesis joint Segmentation Diffusion), a multitask conditional diffusion framework that synthesizes contrast-free virtual LGE from routine cine + T2-STIR for acute infarct assessment and simultaneously segments myocardium, ventricular blood pool, and infarct. SSDiff introduces a feature-disentangled attention module that isolates sequence-specific cues to steer the diffusion process, and a cross-fusion module that aligns synthesis and segmentation decoders for mutual optimization. Evaluated on a multi-center, multi-vendor dataset of 409 subjects (2,177 aligned cine-T2-STIR-LGE triplets), SSDiff yields significant gains in synthetic image quality and downstream segmentation accuracy over strong baselines. Beyond serving as a clinically feasible alternative when LGE is unavailable or contraindicated, SSDiff also generates paired image-mask samples that augment LGE-scarce training, highlighting its practical utility and translational potential. 
Code is available at: https://github.com/QijingGJ/SSDiff.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MediPhen: Prompt-Based LLM Reasoning with Synthesized Multimodal Clinical Knowledge for Zero-Shot Multi-morbidity Phenotyping.","authors":"Y H P P Priyadarshana, Nina Zhou, Pavitra Krishnaswamy, Jiang Ridong, Zilu Liang","doi":"10.1109/JBHI.2026.3689887","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3689887","url":null,"abstract":"<p><p>Clinical decision support using heterogeneous electronic health records (EHRs) is a well-established yet rapidly expanding research area. Large language model (LLM)-driven approaches have shown dominant performance in processing unstructured data such as clinical notes for disease phenotype classification. However, the absence of a unified reasoning framework capable of integrating structured laboratory results with unstructured clinical notes under zero-shot conditions limits progress in multimodal clinical decision support. To address this gap, we propose MediPhen, a novel reasoning framework that transfers LLMs for multi-morbidity disease phenotyping using multimodal clinical data. MediPhen adapts LLMs to zero-shot disease phenotyping by incorporating extracted clinical entities, their relations, and lab narratives from EHRs, integrating a clinical knowledge base to guide phenotype classification and enhance LLM transfer learning performance, and including an explanation module that leverages chain-of-thought prompting to improve clinical reasoning. Comprehensive experiments conducted on MIMIC-III and MIMIC-IV benchmarks across multiple LLMs demonstrate the effectiveness of MediPhen. Notably, MedGemma-27B achieved state-of-the-art performance, improving micro-averaged F1 scores by 19.92% on MIMIC-III and 16.68% on MIMIC-IV compared to fine-tuned baselines. 
These results highlight MediPhen as a zero-shot screening tool for multi-morbidity phenotype classification that is scalable within research infrastructures, advancing the integration of structured and unstructured EHR data in clinical AI.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A GPT-Assisted Multi-Modal Emotion Intelligence Framework for Mental Health Predictive Analytics using Physiological Signals.","authors":"Thamaraimanalan Thangarajan, Ashokkumar S R, Vijayakumar Pandi, Niyaz Hussain A M J, Mary Subaja Christo, Wajdy Othman","doi":"10.1109/JBHI.2026.3687117","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3687117","url":null,"abstract":"<p><p>Improving predictive analytics in healthcare demands robust frameworks that can reliably interpret complex physiological signals. This study presents a multi-modal emotion recognition architecture for mental health care based on EEG, ECG, and GSR recordings, with a GPT-based NLP interface to interpret brief clinical text input and self-reported emotional responses. The framework combines preprocessing with signal synchronization via cross-correlation, noise reduction via the discrete wavelet transform, and event segmentation, followed by feature extraction using the wavelet scattering transform and statistical descriptors. Dimensionality reduction is performed through two-dimensional bidirectional principal component collaborative projection together with canonical correlation analysis to ensure effective feature fusion. Experimental assessment shows that contextual embeddings produced by GPT improve interpretability scores and support clinical reasoning, which in turn improves healthcare decision-making. The optimized WOA-KELM model performs significantly better than traditional classifiers such as SVM, k-NN, and XGBoost, as well as the standard KELM, with valence and arousal classification rates of 96.93 and 99.05, respectively. Valence and arousal are treated as binary classification tasks for emotion recognition. The GPT module is used only for post-classification interpretation and does not influence the classification performance. 
In addition, the GPT component demonstrates the potential of optimized multi-modal solutions to meaningfully advance predictive healthcare analytics, with credible applications in emotion-aware diagnostics, mental health monitoring, adaptive human-computer interaction, and future real-time, personalized healthcare services.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breaking the Black Box: Interpretable AI Achieves Superior Hemorrhage Detection with the Compensatory Reserve Measurement.","authors":"Michael L Tibbs, Benjamin McCloskey, Eric J Snider, Victor A Convertino, Lonnie G Petersen","doi":"10.1109/JBHI.2026.3690009","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3690009","url":null,"abstract":"<p><p>Hemorrhage remains the leading cause of preventable trauma death, with traditional vital signs failing to detect blood loss until 25-30% volume depletion occurs. Compensatory Reserve Measurement (CRM) enables earlier hemorrhage detection but current estimation methods force a tradeoff between performance and interpretability. We present the first Vision Transformer (ViT) for CRM estimation that achieves both superior accuracy compared to previous models and mechanistic explainability from arterial blood pressure (ABP) waveforms. Using data from 208 human subjects who underwent progressive lower body negative pressure, we developed a single-layer ViT that processes 20-second waveform segments as token sequences. Rigorous 10-fold cross-validation compared the ViT against state-of-the-art Convolutional Neural Network (CNN) and manual feature-based models using identical train-validation-test splits. With all models undergoing equivalent Optuna hyperparameter optimization, the ViT achieved higher R2 (0.80 vs 0.77) with fold-level paired t-test p = 0.052 (N = 10) and subject-level p = 0.008 (N = 208). The ViT also demonstrated superior robustness to signal corruption, with the CNN's performance degrading progressively faster under increasing noise and sample dropout. Attention analysis revealed learned patterns converging with established physiological knowledge, prioritizing half-decay and dicrotic notch regions identified as critical by manual feature extraction from the ABP. The model shifted from focused attention at high CRM to distributed monitoring at low CRM, matching known hemodynamics near decompensation. 
Ablation experiments confirmed half-decay regions as functionally critical. This work bridges the performance-interpretability tradeoff, providing the first interpretable deep learning approach for hemorrhage monitoring and CRM estimation.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147836737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kalman-Based Adaptive Moment Estimation Optimisation Algorithm to Enhance GPT in LLMs for Medical Sentiment Analysis of Patient Health-Related Feedback.","authors":"Xingchi Chen, Wenxin Ma, Dazhou Li, Fa Zhu, Sidheswar Routray, Manisha Guduri, Martin Margala","doi":"10.1109/JBHI.2025.3527340","DOIUrl":"10.1109/JBHI.2025.3527340","url":null,"abstract":"<p><p>The progress in Natural Language Processing (NLP) using Large Language Models (LLMs) has greatly improved medical sentiment analysis of patient feedback extracted from health-related questions and answers. However, using LLMs to analyze such data often requires significant training data and computational resources, resulting in considerable increases in training costs and durations, which is one of the primary issues in applying LLMs to real-world healthcare scenarios. To tackle these challenges, a novel optimization algorithm named KAdam-EnGPT4LLM, based on Kalman filters and Adaptive Moment Estimation, is proposed to enhance training efficiency and reduce training costs of LLMs for analyzing patient feedback sentiment. Furthermore, the optimization algorithm KAdam-EnGPT4LLM is employed in training the LLM model GPT4ALL for medical sentiment analysis, resulting in the development of GPT4ALL-MediSentAly-KAdam, which leads to faster convergence and more stable training specifically for medical questions and answers in the context of healthcare. 
The results show that our GPT4ALL-MediSentAly-KAdam, trained with the KAdam-EnGPT4LLM optimization algorithm, achieved the best Accuracy, Recall, F1-score, and Runtime on both datasets, outperforming traditional fine-tuned LLMs such as the classic GPT4ALL, Ada, Babbage, Curie, and Davinci.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":"3744-3753"},"PeriodicalIF":6.8,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}