A K N L Aththanagoda, K A S H Kulathilake, N A Abdullah
"Precision and Personalization: How Large Language Models Redefining Diagnostic Accuracy in Personalized Medicine - A Systematic Literature Review."
IEEE Journal of Biomedical and Health Informatics, Journal Article, published online 2025-06-30. DOI: 10.1109/JBHI.2025.3584179
Abstract: Personalized medicine aims to tailor medical treatments to the unique characteristics of each patient, but its effectiveness relies on achieving diagnostic accuracy to fully understand individual variability in disease response and treatment efficacy. This systematic literature review explores the role of large language models (LLMs) in enhancing diagnostic precision and supporting the advancement of personalized medicine. A comprehensive search was conducted across Web of Science, ScienceDirect, Scopus, and IEEE Xplore, targeting peer-reviewed articles published in English between January 2020 and March 2025 that applied LLMs within personalized medicine contexts. Following PRISMA guidelines, 39 relevant studies were selected and systematically analyzed. The findings indicate a growing integration of LLMs across key domains such as clinical informatics, medical imaging, patient-specific diagnosis, and clinical decision support. LLMs have shown potential in uncovering subtle data patterns critical for accurate diagnosis and personalized treatment planning. This review highlights the expanding role of LLMs in improving diagnostic accuracy in personalized medicine, offering insights into their performance, applications, and challenges, while acknowledging limitations in generalizability due to variable model performance and dataset biases. It also underscores the importance of addressing challenges related to data privacy, model interpretability, and reliability across diverse clinical scenarios. For successful clinical integration, future research must focus on refining LLM technologies, ensuring ethical standards, and validating models continuously to safeguard effective and responsible use in healthcare environments.

Xingtao Lin, Xiahai Zhuang, Lin Pan, Mingjing Yang, Liqin Huang, Shun Chen, Lei Li
"ZSG-Net: A Zero-Shot Super-Resolution Guided Network for Ultrasound Image Segmentation and Classification."
IEEE Journal of Biomedical and Health Informatics, Journal Article, published online 2025-06-30. DOI: 10.1109/JBHI.2025.3584505
Abstract: Automated ultrasound (US) image analysis is hindered by challenges stemming from low resolution, noise, and non-uniform grayscale distribution, which compromise image quality. While many existing studies address these issues using super-resolution (SR) techniques, they often focus exclusively on SR without considering downstream tasks or tailoring to the unique characteristics of US images. In this work, we propose ZSG-Net, a zero-shot super-resolution-guided network, designed to bridge the gap between US image quality enhancement and its benefits in segmentation and classification. First, we introduce a zero-shot self-supervised cycle generative adversarial network (ZSCycle-GAN), tailored to the unique characteristics of US images, to perform SR while preserving critical structural details. Unlike conventional SR methods that focus solely on image enhancement, ZSCycle-GAN is designed to optimize downstream tasks. Second, we adopt a zero-shot self-supervised learning strategy, eliminating the reliance on labeled data and addressing the scarcity of annotated medical imaging datasets. Third, we incorporate a random image degradation (RID) strategy to expand the degradation space for clinical US images, enabling robust learning of diverse quality variations. Extensive experiments on three US image datasets validate the effectiveness of the proposed model. Results demonstrate superior performance in segmentation and classification tasks compared to existing approaches, underscoring the potential of our method to improve US image analysis in clinical settings.

{"title":"MTPrior: A Multi-Task Hierarchical Graph Embedding Framework for Prioritizing Hepatocellular Carcinoma-Associated Genes and Long Noncoding RNAs.","authors":"Fatemeh Keikha, Zhi-Ping Liu","doi":"10.1109/JBHI.2025.3584342","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3584342","url":null,"abstract":"<p><p>Hepatocellular carcinoma (HCC) represents a highly prevalent liver cancer, posing a substantial global health challenge. The prioritization of both coding genes and noncoding RNAs, such as long noncoding RNAs (lncRNAs), is paramount in unraveling the mechanisms of HCC and advancing diagnostics, prognostics and therapeutic strategies. The development of computational models for prioritizing cancer-associated RNAs plays a pivotal role in reducing reliance on costly and time-consuming experimental methodologies. However, most existing approaches focus on a single factor, such as genes, lncRNAs, or microRNAs (miRNAs), neglecting the interactions between coding genes and noncoding RNAs as well as their combined influence. Models capable of prioritizing multiple RNA types while accounting for these interactions remain scarce. In this study, we introduce MTPrior, a multi-task graph embedding prioritization model. Our approach is designed to achieve multi-task prioritization by constructing an adaptable framework that accommodates diverse tasks and refines the network structure tailored to specific tasks. It meticulously considers interactions between coding and noncoding RNAs, navigating efficient biological pathways to discover the most pertinent results. By analyzing extensive datasets from HCC patients, alongside a comprehensive inventory of genes and lncRNAs, we have developed a model that proficiently prioritizes and identifies the most relevant genes and lncRNAs associated with HCC, thereby streamlining research efforts towards key candidates for further investigation. Furthermore, an ablation study underscores the effectiveness of each component within our proposed method. The convincing results demonstrate that MTPrior outperforms other state-of-the-art methods in predicting disease-related genes and lncRNAs, highlighting its efficiency and advantages.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144527662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yongan Guo, Yeqi Huang, Yuao Wang, Yun Liu, Shenqi Jing, Tao Shan, Yuan Miao, Bo Li
"MDP-GRL: Multi-disease Prediction by Graph-enabled Representation Learning."
IEEE Journal of Biomedical and Health Informatics, Journal Article, published online 2025-06-30. DOI: 10.1109/JBHI.2025.3584916
Abstract: In recent years, automatic disease prediction based on electronic health records (EHRs) has emerged as a focal area of research in medical informatics. While successfully facilitating disease diagnosis, this technique still suffers from many limitations caused by the complexity of medical data, particularly the diverse relations and shared risk factors among multiple diseases. In addition, data sparsity and class imbalance in EHRs also undermine the effectiveness of existing approaches. Therefore, new approaches are urgently needed to better accommodate the characteristics of EHR data and make effective predictions of individuals' potential diseases. To address the above challenges, this paper proposes MDP-GRL, a novel multi-label disease prediction model based on graph-enabled representation learning. Specifically, MDP-GRL constructs a medical knowledge graph (MKG) from the patient and disease information in EHRs and then employs a graph neural network (GNN) to realize the disease prediction. To address the data sparsity issue, it incorporates supplementary data for both patients and diseases, i.e., enriching patient nodes with personal basic information, examination indicators, and illness history, and supplementing disease information with comorbidity information, prevalent populations, common causes, and diagnostic basis. To mitigate the data complexity issue, MDP-GRL considers four different relation patterns in the MKG, which improves its modeling capability. To address the data imbalance problem, it introduces an attention mechanism and a self-adversarial negative sampling strategy, which further enhance MDP-GRL's ability to identify error-prone and minority samples. Comprehensive experiments and ablation studies are conducted on the MIMIC-IV dataset. The results demonstrate MDP-GRL's superiority in multi-disease prediction compared with state-of-the-art approaches.

Ari Kusumastuti, Mohammad Isa Irawan, Kistosil Fahim
"Classification of Diabetic Patients using a Network Representation of Their Metabolism."
IEEE Journal of Biomedical and Health Informatics, Journal Article, published online 2025-06-27. DOI: 10.1109/JBHI.2025.3584067
Abstract: Studies on Type 2 Diabetes Mellitus (T2DM) rely on specific metabolic networks to represent the intricate relationships between metabolites. Accurate classification requires analyzing network characteristics, such as distance graphs and topological similarities, and identifying features that effectively capture these aspects. This study focuses on deriving metabolic networks and applying graph embeddings to achieve optimal feature representation and classification performance. We extract metabolic networks from large patient cohorts and targeted tissues, comprising metabolism and gene expression data. We assign patients to three groups (T2DM, non-T2DM, and Healthy) based on the occurrence of T2DM enzymes in the referenced dataset. We build classification models using traditional machine learning techniques and Graph Neural Network (GNN) approaches based on the extracted features. The models are evaluated with several statistical tests to identify the best classification model for new patient data. The impact of interference factors in the normalized feature data and of perturbation on classification performance is also analyzed.

{"title":"Whole Heart Segmentation Based on 3D Contour-guided Multi-head Attention Network From CT and MRI Images.","authors":"Feiyan Li, Weisheng Li, Yidong Peng, Yucheng Shu","doi":"10.1109/JBHI.2025.3584074","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3584074","url":null,"abstract":"<p><p>Heart image segmentation is a critical task in medical image processing, which is crucial for the diagnosis and treatment planning of cardiovascular diseases. It helps doctors understand patients' cardiac anatomy and functional status more comprehensively and lays the foundation for personalized medicine and precision medicine research. Addressing the current challenges of rough surfaces on the entire heart, incomplete segmentation of heart substructures, and the lack of structured prediction of pulmonary arteries due to artifacts, scale diversity, uneven intensity, and boundary ambiguity in cardiac computed tomography (CT) and magnetic resonance imaging (MRI) images, we propose a whole heart segmentation algorithm based on 3D contour guided network. The proposed algorithm achieves robust whole heart segmentation results and has few network structure parameters. To enhance the consistency of features extracted by the codec, we propose a 3D codec information integration module to focus on task-related areas. In the final stage of information integration, features of different scales are combined. A 3D contour attention module enhances the perception of the heart's structure and shape. Contour prediction results from the initial stage, generating a low-resolution voxel of the entire heart with contour details. The second stage builds upon the initial phase of secondary learning to achieve multi-label segmentation results. The proposed algorithm achieved average Dice scores of 0.905 and 0.865 for the CT and MRI modalities, respectively, in 40 cases.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144511818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhanshi Zhu, Qing Dong, Gongning Luo, Wei Wang, Suyu Dong, Kuanquan Wang, Ye Tian, Guohua Wang, Shuo Li
"Causality-Adjusted Data Augmentation for Domain Continual Medical Image Segmentation."
IEEE Journal of Biomedical and Health Informatics, Journal Article, published online 2025-06-27. DOI: 10.1109/JBHI.2025.3584068
Abstract: In domain continual medical image segmentation, distillation-based methods mitigate catastrophic forgetting by continuously reviewing old knowledge. However, these approaches often exhibit biases towards both new and old knowledge simultaneously due to confounding factors, which can undermine segmentation performance. To address these biases, we propose the Causality-Adjusted Data Augmentation (CauAug) framework, introducing a novel causal intervention strategy called the Texture-Domain Adjustment Hybrid-Scheme (TDAHS) alongside two causality-targeted data augmentation approaches: the Cross Kernel Network (CKNet) and the Fourier Transformer Generator (FTGen). (1) TDAHS establishes a domain-continual causal model that accounts for two types of knowledge biases by identifying irrelevant local textures (L) and domain-specific features (D) as confounders. It introduces a hybrid causal intervention that combines traditional confounder elimination with a proposed replacement approach to better adapt to domain shifts, thereby promoting causal segmentation. (2) CKNet eliminates confounder L to reduce biases in new knowledge absorption. It decreases reliance on local textures in input images, forcing the model to focus on relevant anatomical structures and thus improving generalization. (3) FTGen causally intervenes on confounder D by selectively replacing it to alleviate biases that impact old knowledge retention. It restores domain-specific features in images, aiding in the comprehensive distillation of old knowledge. Our experiments show that CauAug significantly mitigates catastrophic forgetting and surpasses existing methods in various medical image segmentation tasks. The implementation code is publicly available at: https://github.com/PerceptionComputingLab/CauAug_DCMIS.

Zixin Yang, Jon S Heiselman, Cheng Han, Kelly Merrell, Richard Simon, Cristian A Linte
"Resolving the Ambiguity of Complete-to-Partial Point Cloud Registration for Image-Guided Liver Surgery with Patches-to-Partial Matching."
IEEE Journal of Biomedical and Health Informatics, Journal Article, published online 2025-06-27. DOI: 10.1109/JBHI.2025.3583875
Abstract: In image-guided liver surgery, the initial rigid alignment between preoperative and intraoperative data, often represented as point clouds, is crucial for providing sub-surface information from preoperative CT/MRI images to the surgeon during the procedure. Currently, this alignment is typically performed using semi-automatic methods, which, while effective to some extent, are prone to errors that demand manual correction. Alternatively, correspondence-based point cloud registration methods further offer a promising fully automatic solution. However, they may struggle in scenarios with limited intraoperative surface visibility, a common challenge in liver surgery, particularly in laparoscopic procedures, which we refer to as complete-to-partial ambiguity. We first illustrate this ambiguity by evaluating the performance of state-of-the-art learning-based point cloud registration methods on our carefully constructed in silico and in vitro datasets. Then, we propose a patches-to-partial matching strategy as a plug-and-play module to resolve the ambiguity, which can be seamlessly integrated into learning-based registration methods without disrupting their end-to-end structure. This approach effectively improves registration performance, especially in low-visibility conditions, reducing registration errors to 6.7 mm (-29%) in silico and 12.5 mm (-40%) in vitro, compared to state-of-the-art performance achieved by Lepard of 9.5 mm and 20.7 mm, respectively. The constructed benchmark and the proposed module establish a solid foundation for advancing applications of point cloud correspondence-based registration methods in image-guided liver surgery. Our code and datasets will be released at https://github.com/zixinyang9109/P2P.

Yuanchang Huang, Yadong Tang, Juhao Wu, Jiayan He, Wenlong Wang
"Automatic Detection and Segmentation of Tooth Cracks Based on Improved Mask R-CNN."
IEEE Journal of Biomedical and Health Informatics, Journal Article, published online 2025-06-26. DOI: 10.1109/JBHI.2025.3582650
Abstract: Early diagnosis and intervention of cracked teeth are crucial for preventing further dental damage. However, the detection of cracked teeth remains challenging for dental clinicians due to the subtle, complex, and irregular features of these cracks. To address this issue, we propose an improved Mask R-CNN instance segmentation network for the automatic detection and segmentation of cracked teeth. First, the backbone network is replaced with ResNeXt-50 (32×4d) to enhance the extraction of local features specific to cracks. Second, we introduce a Crack Feature Enhancement Module (CFEM), with its hyperparameters fine-tuned by Bayesian optimization, which leverages the pixel intensity differences between cracked and non-cracked regions to increase the sensitivity of the Feature Pyramid Network (FPN) to the complex features of cracks while suppressing irrelevant background information. Additionally, the mask head is redesigned into an encoder-decoder structure incorporating dynamic snake convolutions, which better captures crack edge details and integrates deep and shallow feature information, with deep supervision applied to adjust the loss function weights. Extensive experiments and comprehensive evaluations demonstrate that our method outperforms current state-of-the-art techniques. Furthermore, experiments on real intraoral images validate the effectiveness of our approach in detecting tooth cracks. Our model enables more accurate and earlier detection of cracked teeth, improving patient outcomes by allowing timely interventions, reducing the need for invasive treatments, and preserving dental structure. Our code and datasets are available at https://github.com/YCHuang18/ToothCrack.

{"title":"Enhancing Locomotion-Mode Recognition and Transition Prediction with (Bio)Mechanical Sensor Fusion for Intelligent Prosthetic Knees.","authors":"Xiaoming Wang, Shaoping Bai, Linrong Li, Yuanhua Li, Hongliu Yu","doi":"10.1109/JBHI.2025.3583319","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3583319","url":null,"abstract":"<p><p>The ability to continuously recognize locomotion modes and accurately predict transition intentions is essential for intelligent prosthetic knees. In this study, an innovative framework for locomotion recognition and transition prediction was introduced based on fusing mechanical (inertial measurement unit (IMU)) and biomechanical (force myography (FMG)) signals. This framework integrated an FMG-IMU dual-modal sensing system implemented on a prosthetic knee, enabling simultaneous acquisition of FMG-IMU fusion signals from transfemoral amputees during dynamic walking. A novel feature-driven CNN-BiLSTM model was developed and trained as the classifier, enhancing the accuracy and efficiency of locomotion mode prediction. The RelifF-MI algorithm was employed to optimize FMG-IMU features, ensuring efficient data processing by effectively eliminating feature redundancy. The framework was evaluated using data collected from eight transfemoral amputees. The results demonstrated that the fusion of FMG-IMU dual-modal gait data with the feature-driven classifier significantly improved classification performance, achieving an overall average recognition accuracy of 98.51% and an average prediction time of 274 ms (21.82% of the gait cycle) across five locomotion modes-level walking (LW), stair ascent/descent (SA/SD), and ramp ascent/descent (RA/RD)-and eight transitions between these modes. These promising results highlighted the considerable potential of the proposed method for application in prosthetic knee control.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144496087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}