{"title":"An extragradient and noise-tuning adaptive iterative network for diffusion MRI-based microstructural estimation","authors":"Tianshu Zheng , Chuyang Ye , Zhaopeng Cui , Hui Zhang , Daniel C. Alexander , Dan Wu","doi":"10.1016/j.media.2025.103535","DOIUrl":"10.1016/j.media.2025.103535","url":null,"abstract":"<div><div>Diffusion MRI (dMRI) is a powerful technique for investigating tissue microstructure properties. However, advanced dMRI models are typically complex and nonlinear, requiring a large number of acquisitions in the <em>q</em>-space. Deep learning techniques, specifically optimization-based networks, have been proposed to improve the model fitting with limited <em>q</em>-space data. Previous optimization procedures relied on the empirical selection of iteration block numbers, and the network structures were based on the <em>iterative hard thresholding</em> (IHT) algorithm, which may suffer from instability during sparse reconstruction. In this study, we introduced an <em>extragradient and noise-tuning adaptive iterative network</em>, a generic network for estimating dMRI model parameters. We proposed an adaptive mechanism that flexibly adjusts the sparse representation process, depending on specific dMRI models, datasets, and downsampling strategies, avoiding manual selection and accelerating inference. In addition, we proposed a noise-tuning module to assist the network in escaping from local minimum/saddle points. The network also included an additional projection of the extragradient to ensure its convergence. We evaluated the performance of the proposed network on the <em>neurite orientation dispersion and density imaging</em> (NODDI) model and <em>diffusion basis spectrum imaging</em> (DBSI) model on two 3T <em>Human Connectome Project</em> (HCP) datasets and a 7T HCP dataset with six different downsampling strategies. The proposed framework demonstrated superior accuracy and generalizability compared to other state-of-the-art microstructural estimation algorithms.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103535"},"PeriodicalIF":10.7,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143714210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"5D image reconstruction exploiting space-motion-echo sparsity for accelerated free-breathing quantitative liver MRI","authors":"MungSoo Kang , Ricardo Otazo , Gerald Behr , Youngwook Kee","doi":"10.1016/j.media.2025.103532","DOIUrl":"10.1016/j.media.2025.103532","url":null,"abstract":"<div><div>Recent advances in 3D non-Cartesian multi-echo gradient-echo (mGRE) imaging and compressed sensing (CS)-based 4D (3D image space + 1D respiratory motion) motion-resolved image reconstruction, which applies temporal total variation to the respiratory motion dimension, have enabled free-breathing liver tissue MR parameter mapping. This technology now allows for robust reconstruction of high-resolution proton density fat fraction (PDFF), R<span><math><msubsup><mrow></mrow><mrow><mn>2</mn></mrow><mrow><mo>∗</mo></mrow></msubsup></math></span>, and quantitative susceptibility mapping (QSM), previously unattainable with conventional Cartesian mGRE imaging. However, long scan times remain a persistent challenge in free-breathing 3D non-Cartesian mGRE imaging. Recognizing that the underlying dimension of the imaging data is essentially 5D (4D + 1D echo signal evolution), we propose a CS-based 5D motion-resolved mGRE image reconstruction method to further accelerate the acquisition. Our approach integrates discrete wavelet transforms along the echo and spatial dimensions into a CS-based reconstruction model and devises a solution algorithm capable of handling such a 5D complex-valued array. Through phantom and in vivo human subject studies, we evaluated the effectiveness of leveraging unexplored correlations by comparing the proposed 5D reconstruction with the 4D reconstruction (i.e., motion-resolved reconstruction with temporal total variation) across a wide range of acceleration factors. The 5D reconstruction produced more reliable and consistent measurements of PDFF, R<span><math><msubsup><mrow></mrow><mrow><mn>2</mn></mrow><mrow><mo>∗</mo></mrow></msubsup></math></span>, and QSM compared to the 4D reconstruction. In conclusion, the proposed 5D motion-resolved image reconstruction demonstrates the feasibility of achieving accelerated, reliable, and free-breathing liver mGRE imaging for the measurement of PDFF, R<span><math><msubsup><mrow></mrow><mrow><mn>2</mn></mrow><mrow><mo>∗</mo></mrow></msubsup></math></span>, and QSM.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103532"},"PeriodicalIF":10.7,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143675551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Medical SAM adapter: Adapting segment anything model for medical image segmentation","authors":"Junde Wu , Ziyue Wang , Mingxuan Hong , Wei Ji , Huazhu Fu , Yanwu Xu , Min Xu , Yueming Jin","doi":"10.1016/j.media.2025.103547","DOIUrl":"10.1016/j.media.2025.103547","url":null,"abstract":"<div><div>The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation due to its impressive capabilities in various segmentation tasks and its prompt-based interface. However, recent studies and individual experiments have shown that SAM underperforms in medical image segmentation due to the lack of medical-specific knowledge. This raises the question of how to enhance SAM’s segmentation capability for medical images. We propose the Medical SAM Adapter (Med-SA), which is one of the first methods to integrate SAM into medical image segmentation. Med-SA uses a light yet effective adaptation technique instead of fine-tuning the SAM model, incorporating domain-specific medical knowledge into the segmentation model. We also propose Space-Depth Transpose (SD-Trans) to adapt 2D SAM to 3D medical images and Hyper-Prompting Adapter (HyP-Adpt) to achieve prompt-conditioned adaptation. Comprehensive evaluation experiments on 17 medical image segmentation tasks across various modalities demonstrate the superior performance of Med-SA while updating only 2% of the SAM parameters (13M). Our code is released at <span><span>https://github.com/KidsWithTokens/Medical-SAM-Adapter</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103547"},"PeriodicalIF":10.7,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143675535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RED: Residual estimation diffusion for low-dose PET sinogram reconstruction","authors":"Xingyu Ai , Bin Huang , Fang Chen , Liu Shi , Binxuan Li , Shaoyu Wang , Qiegen Liu","doi":"10.1016/j.media.2025.103558","DOIUrl":"10.1016/j.media.2025.103558","url":null,"abstract":"<div><div>Recent advances in diffusion models have demonstrated exceptional performance in generative tasks across various fields. In positron emission tomography (PET), the reduction in tracer dose leads to information loss in sinograms. Using diffusion models to reconstruct missing information can improve imaging quality. Traditional diffusion models effectively use Gaussian noise for image reconstructions. However, in low-dose PET reconstruction, Gaussian noise can worsen the already sparse data by introducing artifacts and inconsistencies. To address this issue, we propose a diffusion model named residual estimation diffusion (RED). From the perspective of the diffusion mechanism, RED uses the residual between sinograms to replace Gaussian noise in the diffusion process, setting the low-dose and full-dose sinograms as the starting point and endpoint of reconstruction, respectively. This mechanism helps preserve the original information in the low-dose sinogram, thereby enhancing reconstruction reliability. From the perspective of data consistency, RED introduces a drift correction strategy to reduce accumulated prediction errors during the reverse process. Calibrating the intermediate results of reverse iterations helps maintain data consistency and enhances the stability of the reconstruction process. In the experiments, RED achieved the best performance across all metrics. Specifically, the PSNR metric showed improvements of 2.75, 5.45, and 8.08 dB at DRF 4, 20, and 100, respectively, compared to traditional methods. The code is available at: <span><span>https://github.com/yqx7150/RED</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103558"},"PeriodicalIF":10.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143675552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Segment Any Tissue: One-shot reference guided training-free automatic point prompting for medical image segmentation","authors":"Xueyu Liu , Guangze Shi , Rui Wang , Yexin Lai , Jianan Zhang , Weixia Han , Min Lei , Ming Li , Xiaoshuang Zhou , Yongfei Wu , Chen Wang , Wen Zheng","doi":"10.1016/j.media.2025.103550","DOIUrl":"10.1016/j.media.2025.103550","url":null,"abstract":"<div><div>Medical image segmentation frequently encounters high annotation costs and challenges in task adaptation. While visual foundation models have shown promise in natural image segmentation, automatically generating high-quality prompts for class-agnostic segmentation of medical images remains a significant practical challenge. To address these challenges, we present Segment Any Tissue (SAT), an innovative, training-free framework designed to automatically prompt the class-agnostic visual foundation model for the segmentation of medical images with only a one-shot reference. SAT leverages the robust feature-matching capabilities of a pretrained foundation model to construct distance metrics in the feature space. By integrating these with distance metrics in the physical space, SAT establishes a dual-space cyclic prompt engineering approach for automatic prompt generation, optimization, and evaluation. Subsequently, SAT utilizes a class-agnostic foundation segmentation model with the generated prompt scheme to obtain segmentation results. Additionally, we extend the one-shot framework by incorporating multiple reference images to construct an ensemble SAT, further enhancing segmentation performance. SAT has been validated on six public and private medical segmentation tasks, capturing both macroscopic and microscopic perspectives across multiple dimensions. In the ablation experiments, automatic prompt selection enabled SAT to effectively handle tissues of various sizes, while also validating the effectiveness of each component. The comparative experiments show that SAT is comparable to, or even exceeds, some fully supervised methods. It also demonstrates superior performance compared to existing one-shot methods. In summary, SAT requires only a single pixel-level annotated reference image to perform tissue segmentation across various medical images in a training-free manner. This not only significantly reduces the annotation costs of applying foundational models to the medical field but also enhances task transferability, providing a foundation for the clinical application of intelligent medicine. Our source code is available at <span><span>https://github.com/SnowRain510/Segment-Any-Tissue</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103550"},"PeriodicalIF":10.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143675554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TriDeNT: Triple deep network training for privileged knowledge distillation in histopathology","authors":"Lucas Farndale , Robert Insall , Ke Yuan","doi":"10.1016/j.media.2025.103479","DOIUrl":"10.1016/j.media.2025.103479","url":null,"abstract":"<div><div>Computational pathology models rarely utilise data that will not be available for inference. This means most models cannot learn from highly informative data such as additional immunohistochemical (IHC) stains and spatial transcriptomics. We present TriDeNT, a novel self-supervised method for utilising privileged data that is not available during inference to improve performance. We demonstrate the efficacy of this method for a range of different paired data including immunohistochemistry, spatial transcriptomics and expert nuclei annotations. In all settings, TriDeNT outperforms other state-of-the-art methods in downstream tasks, with observed improvements of up to 101%. Furthermore, we provide qualitative and quantitative measurements of the features learned by these models and how they differ from baselines. TriDeNT offers a novel method to distil knowledge from scarce or costly data during training, to create significantly better models for routine inputs.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103479"},"PeriodicalIF":10.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143748549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subtyping breast lesions via collective intelligence based long-tailed recognition in ultrasound","authors":"Ruobing Huang , Yinyu Ye , Ao Chang , Han Huang , Zijie Zheng , Long Tan , Guoxue Tang , Man Luo , Xiuwen Yi , Pan Liu , Jiayi Wu , Baoming Luo , Dong Ni","doi":"10.1016/j.media.2025.103548","DOIUrl":"10.1016/j.media.2025.103548","url":null,"abstract":"<div><div>Breast lesions display a wide spectrum of histological subtypes. Recognizing these subtypes is vital for optimizing patient care and facilitating tailored treatment strategies compared to a simplistic binary classification of malignancy. However, this task relies on invasive biopsy tests, which carry inherent risks and can lead to over-diagnosis, unnecessary expenses, and pain for patients. To avoid this, we propose to infer lesion subtypes from ultrasound images directly. Meanwhile, the incidence rates of different subtypes exhibit a skewed long-tailed distribution that presents substantial challenges for effective recognition. Inspired by the collective intelligence used in clinical diagnosis to handle complex or rare cases, we propose a framework, CoDE, that amalgamates the diverse expertise of different backbones to bolster robustness across varying scenarios for automated lesion subtyping. It utilizes dual-level balanced individual supervision to fully exploit prior knowledge while considering class imbalance. It is also equipped with a batch-based online competitive distillation module to stimulate dynamic knowledge exchange. Experimental results demonstrate that the model surpassed state-of-the-art approaches by more than 7.22% in F1-score on a challenging breast dataset with an imbalance ratio as high as 47.9:1.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103548"},"PeriodicalIF":10.7,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143675550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LW-CTrans: A lightweight hybrid network of CNN and Transformer for 3D medical image segmentation","authors":"Hulin Kuang , Yahui Wang , Xianzhen Tan , Jialin Yang , Jiarui Sun , Jin Liu , Wu Qiu , Jingyang Zhang , Jiulou Zhang , Chunfeng Yang , Jianxin Wang , Yang Chen","doi":"10.1016/j.media.2025.103545","DOIUrl":"10.1016/j.media.2025.103545","url":null,"abstract":"<div><div>Recent models based on convolutional neural networks (CNNs) and Transformers have achieved promising performance for 3D medical image segmentation. However, these methods cannot segment small targets well even with large numbers of parameters. Therefore, we design a novel lightweight hybrid network that combines the strengths of CNNs and Transformers (LW-CTrans) and can boost the global and local representation capability at different stages. Specifically, we first design a dynamic stem that can accommodate images of various resolutions. In the first stage of the hybrid encoder, to capture local features with fewer parameters, we propose a multi-path convolution (MPConv) block. In the middle stages of the hybrid encoder, to learn global and local features simultaneously, we propose a multi-view pooling based Transformer (MVPFormer), which projects the 3D feature map onto three 2D subspaces to deal with small objects, and use the MPConv block for enhancing local representation learning. In the final stage, only the proposed MVPFormer is used to capture mainly global features. Finally, to reduce the parameters of the decoder, we propose a multi-stage feature fusion module. Extensive experiments on three public datasets for three tasks (stroke lesion segmentation, pancreas cancer segmentation, and brain tumor segmentation) show that the proposed LW-CTrans achieves Dice scores of 62.35±19.51%, 64.69±20.58%, and 83.75±15.77% on the three datasets, respectively, outperforming 16 state-of-the-art methods, and its parameter counts (2.08M, 2.14M, and 2.21M on the three datasets, respectively) are smaller than those of non-lightweight 3D methods and close to those of lightweight methods. Besides, LW-CTrans also achieves the best performance for small lesion segmentation.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103545"},"PeriodicalIF":10.7,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143642991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MGAug: Multimodal Geometric Augmentation in Latent Spaces of Image Deformations","authors":"Tonmoy Hossain, Miaomiao Zhang","doi":"10.1016/j.media.2025.103540","DOIUrl":"10.1016/j.media.2025.103540","url":null,"abstract":"<div><div>Geometric transformations have been widely used to augment the size of training images. Existing methods often assume a unimodal distribution of the underlying transformations between images, which limits their power when data with multimodal distributions occur. In this paper, we propose a novel model, <em>Multimodal Geometric Augmentation</em> (MGAug), that for the first time generates augmenting transformations in a multimodal latent space of geometric deformations. To achieve this, we first develop a deep network that embeds the learning of latent geometric spaces of diffeomorphic transformations (a.k.a. diffeomorphisms) in a variational autoencoder (VAE). A mixture of multivariate Gaussians is formulated in the tangent space of diffeomorphisms and serves as a prior to approximate the hidden distribution of image transformations. We then augment the original training dataset by deforming images using randomly sampled transformations from the learned multimodal latent space of the VAE. To validate the efficiency of our model, we jointly learn the augmentation strategy with two distinct domain-specific tasks: multi-class classification on both synthetic 2D and real 3D brain MRIs, and segmentation on a real 3D brain MRI dataset. We also compare MGAug with state-of-the-art transformation-based image augmentation algorithms. Experimental results show that our proposed approach outperforms all baselines with significantly improved prediction accuracy. Our code is publicly available at <span><span>GitHub</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103540"},"PeriodicalIF":10.7,"publicationDate":"2025-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A deep learning approach to multi-fiber parameter estimation and uncertainty quantification in diffusion MRI","authors":"William Consagra , Lipeng Ning , Yogesh Rathi","doi":"10.1016/j.media.2025.103537","DOIUrl":"10.1016/j.media.2025.103537","url":null,"abstract":"<div><div>Diffusion MRI (dMRI) is the primary imaging modality used to study brain microstructure <em>in vivo</em>. Reliable and computationally efficient parameter inference for common dMRI biophysical models is a challenging inverse problem, due to factors such as variable dimensionalities (reflecting the unknown number of distinct white matter fiber populations in a voxel), low signal-to-noise ratios, and non-linear forward models. These challenges have led many existing methods to use biologically implausible simplified models to stabilize estimation, for instance, assuming shared microstructure across all fiber populations within a voxel. In this work, we introduce a novel sequential method for multi-fiber parameter inference that decomposes the task into a series of manageable subproblems. These subproblems are solved using deep neural networks tailored to problem-specific structure and symmetry, and trained via simulation. The resulting inference procedure is largely amortized, enabling scalable parameter estimation and uncertainty quantification across all model parameters. Simulation studies and real imaging data analysis using the Human Connectome Project (HCP) demonstrate the advantages of our method over standard alternatives. In the case of the standard model of diffusion, our results show that under HCP-like acquisition schemes, estimates for extra-cellular parallel diffusivity are highly uncertain, while those for the intra-cellular volume fraction can be estimated with relatively high precision.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103537"},"PeriodicalIF":10.7,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143670260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}