{"title":"GAGM: Geometry-aware graph matching framework for weakly supervised gyral hinge correspondence","authors":"Zhibin He, Wuyang Li, Tianming Liu, Xiang Li, Junwei Han, Tuo Zhang, Yixuan Yuan","doi":"10.1016/j.media.2025.103820","DOIUrl":"https://doi.org/10.1016/j.media.2025.103820","url":null,"abstract":"Achieving precise alignment of inter-subject brain landmarks, such as the gyral hinge (GH), would enhance the correspondence of brain function across subjects, thereby advancing our understanding of the brain anatomy-function relationship and brain mechanisms. Recent methods mainly focus on identifying the correspondences of GHs by utilizing point-to-point ground truth. However, labeling point-to-point GH correspondences between subjects for the entire brain is laborious and time-consuming, given the presence of over 400 GHs per brain. To remedy this problem, we propose a Geometry-Aware Graph Matching framework, dubbed GAGM, for weakly supervised gyral hinge correspondence based solely on brain prior information. Specifically, we propose a Shape-Aware Graph Establishment (SAGE) module to ensure a comprehensive representation of the geometric features of GHs. SAGE constructs a structured graph by incorporating GH coordinates, shapes, and inter-GH relationships to model the GHs of the entire brain and learns the spatial relations between them. Moreover, to reduce optimization difficulty, a Region-Aware Graph Matching (RAGM) module is proposed for multi-scale matching. RAGM leverages prior knowledge of the multi-scale relationship between GHs and brain regions and incorporates inter-scale semantic consistency to ensure both intra-region consistency and inter-region variability of GH features, ultimately achieving accurate GH matching. Extensive experiments on two public datasets, HCP and CHCP, demonstrate the superiority of our method over state-of-the-art methods. 
Our code: <ce:inter-ref xlink:href=\"https://github.com/ZhibinHe/GAGM\" xlink:type=\"simple\">https://github.com/ZhibinHe/GAGM</ce:inter-ref>.","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"53 1","pages":""},"PeriodicalIF":10.9,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145182903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Knowledge distillation and teacher–student learning in medical imaging: Comprehensive overview, pivotal role, and future directions","authors":"Xiang Li, Like Li, Minglei Li, Pengfei Yan, Ting Feng, Hao Luo, Yong Zhao, Shen Yin","doi":"10.1016/j.media.2025.103819","DOIUrl":"https://doi.org/10.1016/j.media.2025.103819","url":null,"abstract":"Knowledge Distillation (KD) is a technique to transfer the knowledge from a complex model to a simplified model. It has been widely used in natural language processing and computer vision and has achieved advanced results. Recently, research on KD in medical image analysis has grown rapidly. The definition of knowledge has been further expanded through its combination with the medical field, and its role is no longer limited to simplifying the model. This paper comprehensively reviews the development and application of KD in the medical imaging field. Specifically, we first introduce the basic principles, explaining the definition of knowledge and the classical teacher–student network framework. Then, the research progress in medical image classification, segmentation, detection, reconstruction, registration, radiology report generation, privacy protection, and other application scenarios is presented. In particular, the application scenarios are introduced according to the role of KD. We summarize eight main roles of KD techniques in medical image analysis, including model compression, semi-supervised learning, weakly supervised learning, and class balancing, and analyze the performance of these roles in all application scenarios. Finally, we discuss the challenges in this field and propose potential solutions. As KD is still developing rapidly in the medical imaging field, we give five potential development directions and research hotspots. 
A comprehensive literature list of this survey is available at <ce:inter-ref xlink:href=\"https://github.com/XiangQA-Q/KD-in-MIA\" xlink:type=\"simple\">https://github.com/XiangQA-Q/KD-in-MIA</ce:inter-ref>.","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"20 1","pages":""},"PeriodicalIF":10.9,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145182904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HSFSurv: A hybrid supervision framework at individual and feature levels for multimodal cancer survival analysis","authors":"Bangkang Fu , Junjie He , Xiaoli Zhang , Yunsong Peng , Zhuxu Zhang , Qi Tang , Xinfeng Liu , Ying Cao , Rongpin Wang","doi":"10.1016/j.media.2025.103810","DOIUrl":"10.1016/j.media.2025.103810","url":null,"abstract":"<div><div>Multimodal data play a significant role in survival analysis, with pathological images providing morphological information about tumors and genomic data offering molecular insights. Leveraging multimodal data for survival analysis has become a prominent research topic. However, the heterogeneity of data poses significant challenges to multimodal integration. While existing methods consider interactions among features from different modalities, the heterogeneity of feature spaces often hinders performance in survival analysis. In this paper, we propose a hybrid supervised framework for survival analysis (HSFSurv) based on multimodal feature decomposition. This framework utilizes a multimodal feature decomposition module to partition features into highly correlated and modality-specific components, facilitating targeted feature fusion in subsequent steps. To alleviate feature space heterogeneity, we design an individual-level uncertainty minimization (UMI) module to ensure consistency in prediction outcomes. Additionally, we develop a feature-level multimodal cohort contrastive learning (MCF) module to enforce consistency across features. Moreover, a probabilistic decay detection module with a supervisory signal is introduced to guide the contrastive learning process. These modules are jointly trained to project multimodal features into a shared latent vector space. Finally, we fine-tune the framework for survival analysis tasks to achieve prognostic predictions. 
Experimental results on five cancer datasets demonstrate the state-of-the-art performance of the proposed multimodal fusion framework in survival analysis.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"107 ","pages":"Article 103810"},"PeriodicalIF":11.8,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145156556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CaliDiff: Multi-rater annotation calibrating diffusion probabilistic model towards medical image segmentation","authors":"Junxia Wang , Jing Wang , Jun Ma , Baijing Chen , Zeyuan Chen , Yuanjie Zheng","doi":"10.1016/j.media.2025.103812","DOIUrl":"10.1016/j.media.2025.103812","url":null,"abstract":"<div><div>Medical image segmentation is critical for accurate diagnostics and effective treatment planning. Traditional multi-rater labeling strategies, while integrating consensus from multiple experts, often do not fully capture the unique insights of individual raters. Moreover, deep discriminative models that aggregate such expert labels typically embed inherent biases into the segmentation results. To address these issues, we introduce CaliDiff, a novel multi-rater annotation calibration diffusion probabilistic model. This model effectively approximates the joint probability distribution among multiple expert annotations and their corresponding images, fully leveraging diverse expert knowledge while actively refining these annotations to approximate the true underlying distribution closely. CaliDiff operates through a structured multi-stage process: it begins with a shared-parameter inverse diffusion to normalize initial expert biases, followed by Expertness Consistent Alignment to minimize variance among annotations and enhance consistency in high-confidence areas. Additionally, we incorporate a Committee-based Endogenous Knowledge Learning mechanism that uses adversarial soft supervision to simulate a reliable pseudo-ground truth, integrating Cross-Expert Fusion and Implicit Consensus Inference. 
Extensive experimental evaluations on various medical image segmentation datasets show that CaliDiff not only significantly improves the calibration of annotations but also achieves state-of-the-art performance, thereby enhancing the reliability and objectivity of medical diagnostics.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"107 ","pages":"Article 103812"},"PeriodicalIF":11.8,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145156552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the performance of medical image segmentation with instructive feature learning","authors":"Duwei Dai , Caixia Dong , Haolin Huang , Fan Liu , Zongfang Li , Songhua Xu","doi":"10.1016/j.media.2025.103818","DOIUrl":"10.1016/j.media.2025.103818","url":null,"abstract":"<div><div>Although deep learning models have greatly automated medical image segmentation, they still struggle with complex samples, especially those with irregular shapes, notable scale variations, or blurred boundaries. One key reason for this is that existing methods often overlook the importance of identifying and enhancing the instructive features tailored to various targets, thereby impeding optimal feature extraction and transmission. To address these issues, we propose two innovative modules: an Instructive Feature Enhancement Module (IFEM) and an Instructive Feature Integration Module (IFIM). IFEM synergistically captures rich detailed information and local contextual cues within a unified convolutional module through flexible resolution scaling and extensive information interplay, thereby enhancing the network’s feature extraction capabilities. Meanwhile, IFIM explicitly guides the fusion of encoding–decoding features to create more discriminative representations through sensitive intermediate predictions and omnipresent attention operations, thus refining contextual feature transmission. These two modules can be seamlessly integrated into existing segmentation frameworks, significantly boosting their performance. Furthermore, to achieve superior performance with substantially reduced computational demands, we develop an effective and efficient segmentation framework (EESF). Unlike traditional U-Nets, EESF adopts a shallower and wider asymmetric architecture, achieving a better balance between fine-grained information retention and high-order semantic abstraction with minimal learning parameters. 
Ultimately, by incorporating IFEM and IFIM into EESF, we construct EE-Net, a high-performance and low-resource segmentation network. Extensive experiments across six diverse segmentation tasks consistently demonstrate that EE-Net outperforms a wide range of competing methods in terms of segmentation performance, computational efficiency, and learning ability. The code is available at <span><span>https://github.com/duweidai/EE-Net</span></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"107 ","pages":"Article 103818"},"PeriodicalIF":11.8,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145156555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anatomical structure-guided joint spatiotemporal graph embedding framework for magnetic resonance fingerprint reconstruction","authors":"Peng Li , Jianxing Liu , Yue Hu","doi":"10.1016/j.media.2025.103816","DOIUrl":"10.1016/j.media.2025.103816","url":null,"abstract":"<div><div>Highly undersampled acquisition schemes in magnetic resonance fingerprinting (MRF) typically introduce aliasing artifacts, degrading the accuracy of quantitative imaging. While state-of-the-art graph-based reconstruction methods have shown promise in addressing this challenge by leveraging non-local and non-linear correlations in MRF data, they often face two critical limitations: high computational costs associated with large-scale graph structure estimation and limited capacity to capture complex spatiotemporal dynamics. To overcome these challenges, this study proposes an anatomical structure-guided joint spatiotemporal graph embedding framework for MRF reconstruction. By integrating anatomical segmentation and homogeneity clustering, our framework partitions MRF data into spatially contiguous regions and groups them into clusters based on tissue homogeneity. Subgraphs are then constructed for each cluster, capturing non-local spatial correlations while preserving fine-grained temporal signal dynamics. The hierarchical graph embedding architecture enables efficient focusing on critical correlations, significantly improving reconstruction performance and reducing computational complexity. Numerical experiments on both simulated and <em>in vivo</em> MRF datasets demonstrate that our method outperforms state-of-the-art methods, achieving a <span><math><mo>∼</mo></math></span>2 dB higher signal-to-noise ratio (SNR) in reconstructed data and a <span><math><mo>∼</mo></math></span>70% reduction in reconstruction time. 
The source code is publicly available at <span><span>https://github.com/bigponglee/SP_GE_MRF</span></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"107 ","pages":"Article 103816"},"PeriodicalIF":11.8,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145156553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diffusion-based arbitrary-scale magnetic resonance image super-resolution via progressive k-space reconstruction and denoising","authors":"Jiazhen Wang, Zhihao Shi, Xiang Gu, Yan Yang, Jian Sun","doi":"10.1016/j.media.2025.103814","DOIUrl":"10.1016/j.media.2025.103814","url":null,"abstract":"<div><div>Acquiring high-resolution magnetic resonance (MR) images is challenging due to constraints such as hardware limitations and acquisition times. Super-resolution (SR) techniques offer a potential solution to enhance MR image quality without changing the magnetic resonance imaging (MRI) hardware. However, typical SR methods are designed for fixed upsampling scales and often produce over-smoothed images that lack fine textures and edge details. To address these issues, we propose a unified diffusion-based framework for arbitrary-scale in-plane MR image SR, dubbed Progressive Reconstruction and Denoising Diffusion Model (PRDDiff). Specifically, the forward diffusion process of PRDDiff gradually masks out high-frequency components and adds Gaussian noise to simulate the downsampling process in MRI. To reverse this process, we propose an Adaptive Resolution Restoration Network (ARRNet), which introduces a current step corresponding to the resolution of the input MR image and an ending step corresponding to the target resolution. This design guides the ARRNet to recover the clean MR image at the target resolution from the input MR image. The SR process starts from an MR image at the initial resolution and gradually enhances it to higher resolutions by progressively reconstructing high-frequency components and removing the noise based on the recovered MR image from the ARRNet. Furthermore, we design a multi-stage SR strategy that incrementally enhances resolution through multiple sequential stages to further improve recovery accuracy. 
Each stage utilizes a set number of sampling steps from PRDDiff, guided by a specific ending step, to recover details pertinent to the predefined intermediate resolution. We conduct extensive experiments on the fastMRI knee dataset, the fastMRI brain dataset, our real-collected LR-HR brain dataset, and a clinical pediatric cerebral palsy (CP) dataset, including T1-weighted and T2-weighted images for the brain and proton density-weighted images for the knee. The results demonstrate that PRDDiff outperforms previous MR image super-resolution methods in terms of reconstruction accuracy, generalization, downstream lesion segmentation accuracy, and CP classification performance. The code is publicly available at <span><span>https://github.com/Jiazhen-Wang/PRDDiff-main</span></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"107 ","pages":"Article 103814"},"PeriodicalIF":11.8,"publicationDate":"2025-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145097603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GraSTI-ACL: Graph spatial–temporal infomax with adversarial contrastive learning for brain disorders diagnosis based on resting-state fMRI","authors":"Biao He , Erni Ji , Xiaofen Zong , Zhen Liang , Gan Huang , Li Zhang","doi":"10.1016/j.media.2025.103815","DOIUrl":"10.1016/j.media.2025.103815","url":null,"abstract":"<div><div>Resting-state functional magnetic resonance imaging (rs-fMRI) has been widely used in research on brain disorders due to its informative spatial and temporal resolution, and it shows growing potential as a noninvasive tool for assisting clinical diagnosis. Among various methods based on rs-fMRI, graph neural networks have received significant attention because of their inherent structural similarity to functional connectivity networks (FCNs) of the brain. However, constructing FCNs that effectively capture both spatial and temporal information from rs-fMRI remains challenging, as traditional methods often rely on static, fully connected graphs that risk redundancy and neglect dynamic patterns. Based on the information bottleneck principle, this paper proposes a graph augmentation strategy named Graph Spatial–Temporal Infomax (GraSTI) to adaptively preserve both global spatial brain-wide FCNs and local temporal dynamics. We provide theoretical explanations for GraSTI and design a practical implementation adapted to our graph augmentation strategy to enhance feature capture capability. Furthermore, GraSTI is incorporated into an adversarial contrastive learning framework to achieve a mutual information equilibrium between graph representation effectiveness and robustness for downstream brain disorder diagnosis tasks. The proposed method is evaluated on datasets from three different brain disorders: Alzheimer’s disease (AD), major depressive disorder (MDD), and bipolar disorder (BD). 
Extensive experiments demonstrate that the proposed GraSTI-ACL achieves diagnostic accuracy gains of 0.13% to 23.56% for AD, 1.23% to 13.81% for MDD, and 2.53% to 24.53% for BD diagnosis over existing methods. Meanwhile, our method demonstrates strong interpretability in identifying relevant brain regions and connectivities for different brain disorders.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"107 ","pages":"Article 103815"},"PeriodicalIF":11.8,"publicationDate":"2025-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145156554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SuperDiff: A diffusion super-resolution method for digital pathology with comprehensive quality assessment","authors":"Xuan Xu, Saarthak Kapse, Prateek Prasanna","doi":"10.1016/j.media.2025.103808","DOIUrl":"https://doi.org/10.1016/j.media.2025.103808","url":null,"abstract":"Digital pathology has advanced significantly over the last decade, with Whole Slide Images (WSIs) encompassing vast amounts of data essential for accurate disease diagnosis. High-resolution WSIs are essential for precise diagnosis, but technical limitations in scanning equipment and variability in slide preparation can hinder obtaining such images. Super-resolution techniques can enhance low-resolution images. While Generative Adversarial Networks (GANs) have been effective in natural image super-resolution tasks, they often struggle with histopathology due to overfitting and mode collapse. Traditional evaluation metrics also fall short in assessing the complex characteristics of histopathology images, necessitating robust histology-specific evaluation methods.","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"4 1","pages":""},"PeriodicalIF":10.9,"publicationDate":"2025-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145182905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PE-RBNAS: A robust neural architecture search with progressive-enhanced strategies for brain network classification","authors":"Xingyu Wang, Junzhong Ji, Gan Liu, Yadong Xiao","doi":"10.1016/j.media.2025.103813","DOIUrl":"10.1016/j.media.2025.103813","url":null,"abstract":"<div><div>Functional Brain Network (FBN) classification methods based on Neural Architecture Search (NAS) have been emerging rapidly, with their core advantage being the ability to automatically construct high-quality network architectures. However, existing methods exhibit poor robustness when dealing with FBNs that have inherent high-noise characteristics. To address this issue, we propose a robust NAS with progressive-enhanced strategies for FBN classification. Specifically, this method adopts Particle Swarm Optimization as the search method, treating candidate architectures as individuals, and proposes two progressive-enhanced (PE) strategies to optimize the critical stages of population sampling and fitness evaluation. In the population sampling stage, we first utilize Latin Hypercube Sampling to initialize a small-scale population, ensuring a broad search range. Subsequently, to reduce random fluctuations in searches, we propose a PE supplementary sampling strategy that identifies advantageous regions of the solution space and performs precise supplementary sampling of the population. In the fitness evaluation stage, to enhance the noise resistance of the searched architectures, we propose a PE fitness evaluation strategy. This strategy first evaluates individual fitness separately using both original data and artificially constructed noise-augmented data, then combines the two fitness scores through a novel progressive formula to determine the final individual fitness. Experiments were conducted on two public datasets, ABIDE I (1,112 subjects, 17 sites) and ADHD-200 (776 subjects, 8 sites), using the AAL/CC200 atlases. 
Results demonstrate that PE-RBNAS achieves state-of-the-art performance, with 72.61% accuracy on clean ABIDE I data (vs. 71.05% for MC-APSONAS) and 71.82% accuracy at a noise level of 0.2 (vs. 68.15% for PSO-BNAS). These results indicate that, compared to other methods, the proposed method achieves better model performance and superior noise resistance.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"107 ","pages":"Article 103813"},"PeriodicalIF":11.8,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145119636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}