{"title":"Ultrafast Online Clutter Filtering for Ultrasound Microvascular Imaging","authors":"Yinran Chen;Baohui Fang;Huaying Li;Lijie Huang;Jianwen Luo","doi":"10.1109/TMI.2025.3535550","DOIUrl":"10.1109/TMI.2025.3535550","url":null,"abstract":"Spatiotemporal clutter filtering via robust principal component analysis (rPCA) has been widely used in ultrasound microvascular imaging. However, the performance of the rPCA clutter filtering highly relies on low-rank modeling for tissue signals and sparse modeling for blood flow signals. Moreover, current rPCA clutter filters are typically based on static processing and have to access a batch of beamformed frames for optimization. This prevents these filters from ultrafast realization. This paper adopts the iteratively reweighted least squares (IRLS) rPCA framework to model tissue and blood flow signals for improved clutter filtering. More importantly, the static IRLS-rPCA filter is upgraded to a spatiotemporal-constrained online method to instantaneously extract blood flow signals from the ongoing beamformed frame. Simulations and in-vivo experiments on a contrast-enhanced rat kidney and a contrast-free human liver demonstrated that the IRLS-rPCA clutter filter achieves higher sensitivity, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) than other rPCA methods. Particularly, the static IRLS-rPCA clutter filter obtains more than 2 dB improvements in CNR over the compared methods in the human liver dataset. The proposed online clutter filter achieves comparable image quality to the static version and processing time of <inline-formula> <tex-math>$0.028~pm ~0.004$ </tex-math></inline-formula> seconds per frame. The corresponding acceleration factor of the online clutter filter over all the tested methods is more than 20.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2477-2491"},"PeriodicalIF":0.0,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143054768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Treatment-Aware Diffusion Probabilistic Model for Longitudinal MRI Generation and Diffuse Glioma Growth Prediction","authors":"Qinghui Liu;Elies Fuster-Garcia;Ivar Thokle Hovden;Bradley J. MacIntosh;Edvard O. S. Grødem;Petter Brandal;Carles Lopez-Mateu;Donatas Sederevičius;Karoline Skogen;Till Schellhorn;Atle Bjørnerud;Kyrre Eeg Emblem","doi":"10.1109/TMI.2025.3533038","DOIUrl":"10.1109/TMI.2025.3533038","url":null,"abstract":"Diffuse gliomas are malignant brain tumors that grow widespread through the brain. The complex interactions between neoplastic cells and normal tissue, as well as the treatment-induced changes often encountered, make glioma tumor growth modeling challenging. In this paper, we present a novel end-to-end network capable of future predictions of tumor masks and multi-parametric magnetic resonance images (MRI) of how the tumor will look at any future time points for different treatment plans. Our approach is based on cutting-edge diffusion probabilistic models and deep-segmentation neural networks. We included sequential multi-parametric MRI and treatment information as conditioning inputs to guide the generative diffusion process as well as a joint segmentation process. This allows for tumor growth estimates and realistic MRI generation at any given treatment and time point. We trained the model using real-world postoperative longitudinal MRI data with glioma tumor growth trajectories represented as tumor segmentation maps over time. The model demonstrates promising performance across various tasks, including generating high-quality multi-parametric MRI with tumor masks, performing time-series tumor segmentations, and providing uncertainty estimates. Combined with the treatment-aware generated MRI, the tumor growth predictions with uncertainty estimates can provide useful information for clinical decision-making.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2449-2462"},"PeriodicalIF":0.0,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10851394","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143026342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Navigating Through Whole Slide Images With Hierarchy, Multi-Object, and Multi-Scale Data","authors":"Manuel Tran;Sophia Wagner;Wilko Weichert;Christian Matek;Melanie Boxberg;Tingying Peng","doi":"10.1109/TMI.2025.3532728","DOIUrl":"10.1109/TMI.2025.3532728","url":null,"abstract":"Building deep learning models that can rapidly segment whole slide images (WSIs) using only a handful of training samples remains an open challenge in computational pathology. The difficulty lies in the histological images themselves: many morphological structures within a slide are closely related and very similar in appearance, making it difficult to distinguish between them. However, a skilled pathologist can quickly identify the relevant phenotypes. Through years of training, they have learned to organize visual features into a hierarchical taxonomy (e.g., identifying carcinoma versus healthy tissue, or distinguishing regions within a tumor as cancer cells, the microenvironment, …). Thus, each region is associated with multiple labels representing different tissue types. Pathologists typically deal with this by analyzing the specimen at multiple scales and comparing visual features between different magnifications. Inspired by this multi-scale diagnostic workflow, we introduce the Navigator, a vision model that navigates through WSIs like a domain expert: it searches for the region of interest at a low scale, zooms in gradually, and localizes ever finer microanatomical classes. As a result, the Navigator can detect coarse-grained patterns at lower resolution and fine-grained features at higher resolution. In addition, to deal with sparsely annotated samples, we train the Navigator with a novel semi-supervised framework called S5CL v2. The proposed model improves the F1 score by up to 8% on various datasets including our challenging new TCGA-COAD-30CLS and Erlangen cohorts.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 5","pages":"2002-2015"},"PeriodicalIF":0.0,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143020541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SCDM: Unified Representation Learning for EEG-to-fNIRS Cross-Modal Generation in MI-BCIs","authors":"Yisheng Li;Yishan Wang;Baiying Lei;Shuqiang Wang","doi":"10.1109/TMI.2025.3532480","DOIUrl":"10.1109/TMI.2025.3532480","url":null,"abstract":"Hybrid motor imagery brain-computer interfaces (MI-BCIs), which integrate both electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) signals, outperform those based solely on EEG. However, simultaneously recording EEG and fNIRS signals is highly challenging due to the difficulty of colocating both types of sensors on the same scalp surface. This physical constraint complicates the acquisition of high-quality hybrid signals, thereby limiting the widespread application of hybrid MI-BCIs. To address this issue, this study proposes the spatio-temporal controlled diffusion model (SCDM) as a framework for cross-modal generation from EEG to fNIRS. The model utilizes two core modules, the spatial cross-modal generation (SCG) module and the multi-scale temporal representation (MTR) module, which adaptively learn the respective latent temporal and spatial representations of both signals in a unified representation space. The SCG module further maps EEG representations to fNIRS representations by leveraging their spatial relationships. Experimental results show high similarity between synthetic and real fNIRS signals. The joint classification performance of EEG and synthetic fNIRS signals is comparable to or even better than that of EEG with real fNIRS signals. Furthermore, the synthetic signals exhibit similar spatio-temporal features to real signals while preserving spatial relationships with EEG signals. To our knowledge, it is the first work that an end-to-end framework is proposed to achieve cross-modal generation from EEG to fNIRS. Experimental results suggest that the SCDM may represent a promising paradigm for the acquisition of hybrid EEG-fNIRS signals in MI-BCI systems.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2384-2394"},"PeriodicalIF":0.0,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142991278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RSAPower: Random Style Augmentation Driven Structure Perception Network for Generalized Retinal OCT Fluid Segmentation","authors":"Chenggang Lu;Zhitao Guo;Dan Zhang;Lei Mou;Jinli Yuan;Shaodong Ma;Da Chen;Yitian Zhao;Kewen Xia;Jiong Zhang","doi":"10.1109/TMI.2025.3531496","DOIUrl":"10.1109/TMI.2025.3531496","url":null,"abstract":"Optical Coherence Tomography (OCT) imaging is extensively utilized for non-invasive observation of pathological conditions, such as retinal fluid-associated diseases. Accurate fluid segmentation in OCT images is therefore critical for quantifying disease severity and aiding clinical decision-making. However, achieving precise segmentation remains challenging due to pathological variations in shape and size, uncertain boundaries, and low contrast of fluid. Most importantly, variability in OCT image styles across different vendors and centers significantly affects fluid segmentation, leading to poor generalization to unseen domains. To address this, we propose a novel method, RSAPower, to enhance the generalization ability of fluid perception networks via style augmentation for retinal fluid segmentation. Specifically, RSAPower comprises a plug-and-play random style transform augmentation (RSTAug) module and a novel fluid perception network (FLPNet) for end-to-end training. The RSTAug module generates new random-style data from the source domain, preserving realistic pathological and structural features. The FLPNet benefits from a novel hybrid structure attention (HSA) module to perceive fluid’s spatial features and long-range dependence. Furthermore, FLPNet adapts to the diverse augmented data through a saliency-guided multi-scale attention (SGMA) block, boosting its segmentation performance. We validate RSAPower against various state-of-the-art methods using two publicly available datasets, Retouch and Kermany. Experimental results demonstrate the proposed method’s superior generalization ability and effectiveness in fluid segmentation.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 5","pages":"2353-2367"},"PeriodicalIF":0.0,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142991279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asymmetric Adaptive Heterogeneous Network for Multi-Modality Medical Image Segmentation","authors":"Shenhai Zheng;Xin Ye;Chaohui Yang;Lei Yu;Weisheng Li;Xinbo Gao;Yue Zhao","doi":"10.1109/TMI.2025.3526604","DOIUrl":"10.1109/TMI.2025.3526604","url":null,"abstract":"Existing studies of multi-modality medical image segmentation tend to aggregate all modalities without discrimination and employ multiple symmetric encoders or decoders for feature extraction and fusion. They often overlook the different contributions to visual representation and intelligent decisions among multi-modality images. Motivated by this discovery, this paper proposes an asymmetric adaptive heterogeneous network for multi-modality image feature extraction with modality discrimination and adaptive fusion. For feature extraction, it uses a heterogeneous two-stream asymmetric feature-bridging network to extract complementary features from auxiliary multi-modality and leading single-modality images, respectively. For feature adaptive fusion, the proposed Transformer-CNN Feature Alignment and Fusion (T-CFAF) module enhances the leading single-modality information, and the Cross-Modality Heterogeneous Graph Fusion (CMHGF) module further fuses multi-modality features at a high-level semantic layer adaptively. Comparative evaluation with ten segmentation models on six datasets demonstrates significant efficiency gains as well as highly competitive segmentation accuracy. (Our code is publicly available at <uri>https://github.com/joker-527/AAHN</uri>).","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1836-1852"},"PeriodicalIF":0.0,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142991455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FedBCD: Federated Ultrasound Video and Image Joint Learning for Breast Cancer Diagnosis","authors":"Tianpeng Deng;Chunwang Huang;Ming Cai;Yu Liu;Min Liu;Jiatai Lin;Zhenwei Shi;Bingchao Zhao;Jingqi Huang;Changhong Liang;Guoqiang Han;Zaiyi Liu;Ying Wang;Chu Han","doi":"10.1109/TMI.2025.3532474","DOIUrl":"10.1109/TMI.2025.3532474","url":null,"abstract":"Ultrasonography plays an essential role in breast cancer diagnosis. Current deep learning based studies train the models on either images or videos in a centralized learning manner, lacking consideration of joint benefits between two different modality models or the privacy issue of data centralization. In this study, we propose the first decentralized learning solution for joint learning with breast ultrasound video and image, called FedBCD. To enable the model to learn from images and videos simultaneously and seamlessly in client-level local training, we propose a Joint Ultrasound Video and Image Learning (JUVIL) model to bridge the dimension gap between video and image data by incorporating temporal and spatial adapters. The parameter-efficient design of JUVIL with trainable adapters and frozen backbone further reduces the computational cost and communication burden of federated learning, finally improving the overall efficiency. Moreover, considering conventional model-wise aggregation may lead to unstable federated training due to different modalities, data capacities in different clients, and different functionalities across layers. We further propose a Fisher information matrix (<underline>FI</u>M) guided <underline>L</u>ayer-wise <underline>A</u>ggregation method named FILA. By measuring layer-wise sensitivity with FIM, FILA assigns higher contributions to the clients with lower sensitivity, improving personalized performance during federated training. Extensive experiments on three image clients and one video client demonstrate the benefits of joint learning architecture, especially for the ones with small-scale data. FedBCD significantly outperforms nine federated learning methods on both video-based and image-based diagnoses, demonstrating the superiority and potential for clinical practice. Code is released at <uri>https://github.com/tianpeng-deng/FedBCD</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2395-2407"},"PeriodicalIF":0.0,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142991277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feedback Attention to Enhance Unsupervised Deep Learning Image Registration in 3D Echocardiography","authors":"Md. Kamrul Hasan;Yihao Luo;Guang Yang;Choon Hwai Yap","doi":"10.1109/TMI.2025.3530501","DOIUrl":"10.1109/TMI.2025.3530501","url":null,"abstract":"Cardiac motion estimation is important for assessing the contractile health of the heart, and performing this in 3D can provide advantages due to the complex 3D geometry and motions of the heart. Deep learning image registration (DLIR) is a robust way to achieve cardiac motion estimation in echocardiography, providing speed and precision benefits, but DLIR in 3D echo remains challenging. Successful unsupervised 2D DLIR strategies are often not effective in 3D, and there have been few 3D echo DLIR implementations. Here, we propose a new spatial feedback attention (FBA) module to enhance unsupervised 3D DLIR and enable it. The module uses the results of initial registration to generate a co-attention map that describes remaining registration errors spatially and feeds this back to the DLIR to minimize such errors and improve self-supervision. We show that FBA improves a range of promising 3D DLIR designs, including networks with and without transformer enhancements, and that it can be applied to both fetal and adult 3D echo, suggesting that it can be widely and flexibly applied. We further find that the optimal 3D DLIR configuration is when FBA is combined with a spatial transformer and a DLIR backbone modified with spatial and channel attention, which outperforms existing 3D DLIR approaches. FBA’s good performance suggests that spatial attention is a good way to enable scaling up from 2D DLIR to 3D and that a focus on the quality of the image after registration warping is a good way to enhance DLIR performance. Codes and data are available at: <uri>https://github.com/kamruleee51/Feedback_DLIR</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 5","pages":"2230-2243"},"PeriodicalIF":0.0,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142987523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learnable Prompting SAM-Induced Knowledge Distillation for Semi-Supervised Medical Image Segmentation","authors":"Kaiwen Huang;Tao Zhou;Huazhu Fu;Yizhe Zhang;Yi Zhou;Chen Gong;Dong Liang","doi":"10.1109/TMI.2025.3530097","DOIUrl":"10.1109/TMI.2025.3530097","url":null,"abstract":"The limited availability of labeled data has driven advancements in semi-supervised learning for medical image segmentation. Modern large-scale models tailored for general segmentation, such as the Segment Anything Model (SAM), have revealed robust generalization capabilities. However, applying these models directly to medical image segmentation still exposes performance degradation. In this paper, we propose a learnable prompting SAM-induced Knowledge distillation framework (KnowSAM) for semi-supervised medical image segmentation. Firstly, we propose a Multi-view Co-training (MC) strategy that employs two distinct sub-networks to employ a co-teaching paradigm, resulting in more robust outcomes. Secondly, we present a Learnable Prompt Strategy (LPS) to dynamically produce dense prompts and integrate an adapter to fine-tune SAM specifically for medical image segmentation tasks. Moreover, we propose SAM-induced Knowledge Distillation (SKD) to transfer useful knowledge from SAM to two sub-networks, enabling them to learn from SAM’s predictions and alleviate the effects of incorrect pseudo-labels during training. Notably, the predictions generated by our subnets are used to produce mask prompts for SAM, facilitating effective inter-module information exchange. Extensive experimental results on various medical segmentation tasks demonstrate that our model outperforms the state-of-the-art semi-supervised segmentation approaches. Crucially, our SAM distillation framework can be seamlessly integrated into other semi-supervised segmentation methods to enhance performance. The code will be released upon acceptance of this manuscript at <uri>https://github.com/taozh2017/KnowSAM</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 5","pages":"2295-2306"},"PeriodicalIF":0.0,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142987449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Boosting Convolution With Efficient MLP-Permutation for Volumetric Medical Image Segmentation","authors":"Yi Lin;Xiao Fang;Dong Zhang;Kwang-Ting Cheng;Hao Chen","doi":"10.1109/TMI.2025.3530113","DOIUrl":"10.1109/TMI.2025.3530113","url":null,"abstract":"Recently, the advent of Vision Transformer (ViT) has brought substantial advancements in 3D benchmarks, particularly in 3D volumetric medical image segmentation (Vol-MedSeg). Concurrently, multi-layer perceptron (MLP) network has regained popularity among researchers due to their comparable results to ViT, albeit with the exclusion of the resource-intensive self-attention module. In this work, we propose a novel permutable hybrid network for Vol-MedSeg, named PHNet, which capitalizes on the strengths of both convolution neural networks (CNNs) and MLP. PHNet addresses the intrinsic anisotropy problem of 3D volumetric data by employing a combination of 2D and 3D CNNs to extract local features. Besides, we propose an efficient multi-layer permute perceptron (MLPP) module that captures long-range dependence while preserving positional information. This is achieved through an axis decomposition operation that permutes the input tensor along different axes, thereby enabling the separate encoding of the positional information. Furthermore, MLPP tackles the resolution sensitivity issue of MLP in Vol-MedSeg with a token segmentation operation, which divides the feature into smaller tokens and processes them individually. Extensive experimental results validate that PHNet outperformed the state-of-the-art methods with lower computational costs on the widely-used yet challenging COVID-19-20, Synapse, LiTS and MSD BraTS benchmarks. The ablation study also demonstrated the effectiveness of PHNet in harnessing the strengths of both CNNs and MLP. The code is available on Github: <uri>https://github.com/xiaofang007/PHNet</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 5","pages":"2341-2352"},"PeriodicalIF":0.0,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142987447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}