{"title":"Ultra-Sparse-View Cone-Beam CT Reconstruction-Based Strictly Structure-Preserved Deep Neural Network in Image-Guided Radiation Therapy","authors":"Ying Song;Weikang Zhang;Tianxiong Wu;Yong Luo;Jiangyuan Shi;Xinjian Yang;Zhonghua Deng;Xu Qi;Guangjun Li;Sen Bai;Jun Zhao;Renming Zhong","doi":"10.1109/TMI.2025.3541242","DOIUrl":"10.1109/TMI.2025.3541242","url":null,"abstract":"Radiation therapy is regarded as the mainstay treatment for cancer in clinic. Kilovoltage cone-beam CT (CBCT) images have been acquired for most treatment sites as the clinical routine for image-guided radiation therapy (IGRT). However, repeated CBCT scanning brings extra irradiation dose to the patients and decreases clinical efficiency. Sparse CBCT scanning is a possible solution to the problems mentioned above but at the cost of inferior image quality. To decrease the extra dose while maintaining the CBCT quality, deep learning (DL) methods are widely adopted. In this study, planning CT was used as prior information, and the corresponding strictly structure-preserved CBCT was simulated based on the attenuation information from the planning CT. We developed a hyper-resolution ultra-sparse-view CBCT reconstruction model, known as the planning CT-based strictly-structure-preserved neural network (PSSP-NET), using a generative adversarial network (GAN). This model utilized clinical CBCT projections with extremely low sampling rates for the rapid reconstruction of high-quality CBCT images, and its clinical performance was evaluated in head-and-neck cancer patients. Our experiments demonstrated enhanced performance and improved reconstruction speed.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2605-2616"},"PeriodicalIF":0.0,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143417408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Unbiased Activation Maps for Weakly Supervised Tissue Segmentation of Histopathological Images","authors":"Yuxin Kang;Hansheng Li;Xiaoshuang Shi;Xiao Zhang;Yaqiong Xing;Yuting Wen;Yi Wang;Lei Cui;Jun Feng;Lin Yang","doi":"10.1109/TMI.2025.3541115","DOIUrl":"10.1109/TMI.2025.3541115","url":null,"abstract":"Tissue segmentation in histopathological images plays a crucial role in computational pathology, owing to its significant potential to indicate the prognosis of cancer patients. Presently, numerous Weakly Supervised Semantic Segmentation (WSSS) methods strive to utilize image-level labels to achieve pixel-level segmentation, aiming to minimize the need for detailed annotations. Most of these methods rely on Class Activation Maps (CAM) extracted from classification models, frequently leading to poor coverage of objects. The major cause is attributed to the strong inductive bias of the classification model, focusing primarily on discriminative feature of objects, rather than non-discriminative features. Inspired by this, we propose a simple yet effective method that introduces a self-supervised task by exploiting both the discriminative and non-discriminative features, and generate Unbiased Activation Maps (UAM) to encompass the whole object. Specifically, our method entails clustering all spatial features of an object class to derive semantic centers. Each center then works as a spatial filter that amplifies similar feature and suppresses dissimilar feature, and extract high-quality pseudo-labels (some noise at object boundaries). Moreover, we further propose a Noise-Reduced (NR) Learning method to train the segmentation network towards credible signals and lessen the impact of false predictions. Comprehensive experimental results on two public histopathology image datasets demonstrate the superior performance of our method over the state-of-the-art weakly supervised segmentation methods.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2631-2642"},"PeriodicalIF":0.0,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143417413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Strip Convolution and Adaptive Morphology Perception Plugin for Medical Anatomy Segmentation","authors":"Guyue Hu;Yukun Kang;Gangming Zhao;Zhe Jin;Chenglong Li;Jin Tang","doi":"10.1109/TMI.2025.3540211","DOIUrl":"10.1109/TMI.2025.3540211","url":null,"abstract":"Medical anatomy segmentation is essential for computer-aided diagnosis and lesion localization in medical images. For example, segmenting individual ribs benefits localizing the lung lesions and providing vital medical measurements (such as rib spacing) for generating medical reports. Existing methods segment shape-different anatomies (such as striped ribs, bulky lungs, and angular scapula) with the same network architecture, the morphology heterogeneity is heavily overlooked. Although some shape-aware operators like deformable convolution and dynamic snake convolution have been introduced to cater to specific object morphology, they still struggle with orientation-varying strip structures, such as 24 ribs and 2 clavicles. In this paper, we propose a novel convolution plugin (DSC-AMP) for medical anatomy segmentation, which is comprised of a dynamic strip convolution (DSC) operator and an adaptive morphology perception (AMP) strategy. Specifically, the dynamic strip convolution customizes gradually varying directions and offsets for each local region, achieving dynamic striped receptive fields. Additionally, the adaptive morphology perception strategy incorporates insights from various shape-aware convolutional kernels, enabling the model to discern and integrate crucial representations corresponding to heterogeneous anatomies. Extensive experiments on two large-scale datasets demonstrate the effectiveness and superiority of the proposed approach for tackling heterogeneous medical anatomy segmentation.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2541-2552"},"PeriodicalIF":0.0,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143417411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Weighting Based Metal Artifact Reduction in CT Images","authors":"Hong Wang;Yichen Wu;Yongbo Wang;Dong Wei;Xian Wu;Jianhua Ma;Yefeng Zheng","doi":"10.1109/TMI.2025.3534316","DOIUrl":"10.1109/TMI.2025.3534316","url":null,"abstract":"Against the metal artifact reduction (MAR) task in computed tomography (CT) imaging, most of the existing deep-learning-based approaches generally select a single Hounsfield unit (HU) window followed by a normalization operation to preprocess CT images. However, in practical clinical scenarios, different body tissues and organs are often inspected under varying window settings for good contrast. The methods trained on a fixed single window would lead to insufficient removal of metal artifacts when being transferred to deal with other windows. To alleviate this problem, few works have proposed to reconstruct the CT images under multiple-window configurations. Albeit achieving good reconstruction performance for different windows, they adopt to directly supervise each window learning in an equal weighting way based on the training set. To improve the learning flexibility and model generalizability, in this paper, we propose an adaptive weighting algorithm, called AdaW, for the multiple-window metal artifact reduction, which can be applied to different deep MAR network backbones. Specifically, we first formulate the multiple window learning task as a bi-level optimization problem. Then we derive an adaptive weighting optimization algorithm where the learning process for MAR under each window is automatically weighted via a learning-to-learn paradigm based on the training set and validation set. This rationality is finely substantiated through theoretical analysis. Based on different network backbones, experimental comparisons executed on five datasets with different body sites comprehensively validate the effectiveness of AdaW in helping improve the generalization performance as well as its good applicability. We will release the code at <uri>https://github.com/hongwang01/AdaW</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2408-2423"},"PeriodicalIF":0.0,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143418434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Foundation Model for Lesion Segmentation on Brain MRI With Mixture of Modality Experts","authors":"Xinru Zhang;Ni Ou;Berke Doga Basaran;Marco Visentin;Mengyun Qiao;Renyang Gu;Paul M. Matthews;Yaou Liu;Chuyang Ye;Wenjia Bai","doi":"10.1109/TMI.2025.3540809","DOIUrl":"10.1109/TMI.2025.3540809","url":null,"abstract":"Brain lesion segmentation is crucial for neurological disease research and diagnosis. As different types of lesions exhibit distinct characteristics on different imaging modalities, segmentation methods are typically developed in a task-specific manner, where each segmentation model is tailored to a specific lesion type and modality. However, the use of task-specific models requires predetermination of the lesion type and imaging modality, which complicates their deployment in real-world scenarios. In this work, we propose a universal foundation model for brain lesion segmentation on magnetic resonance imaging (MRI), which can automatically segment different types of brain lesions given input of various MRI modalities. We develop a novel Mixture of Modality Experts (MoME) framework with multiple expert networks attending to different imaging modalities. A hierarchical gating network is proposed to combine the expert predictions and foster expertise collaboration. Moreover, to avoid the degeneration of each expert network, we introduce a curriculum learning strategy during training to preserve the specialisation of each expert. In addition to MoME, to handle the combination of multiple input modalities, we propose MoME+, which uses a soft dispatch network for input modality routing. We evaluated the proposed method on nine brain lesion datasets, encompassing five imaging modalities and eight lesion types. The results show that our model outperforms state-of-the-art universal models for brain lesion segmentation and achieves promising generalisation performance onto unseen datasets.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2594-2604"},"PeriodicalIF":0.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143393109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information Geometric Approaches for Patient-Specific Test-Time Adaptation of Deep Learning Models for Semantic Segmentation","authors":"Hariharan Ravishankar;Naveen Paluru;Prasad Sudhakar;Phaneendra K. Yalavarthy","doi":"10.1109/TMI.2025.3540546","DOIUrl":"10.1109/TMI.2025.3540546","url":null,"abstract":"The test-time adaptation (TTA) of deep-learning-based semantic segmentation models, specific to individual patient data, was addressed in this study. The existing TTA methods in medical imaging are often unconstrained, require anatomical prior information or additional neural networks built during training phase, making them less practical, and prone to performance deterioration. In this study, a novel framework based on information geometric principles was proposed to achieve generic, off-the-shelf, regularized patient-specific adaptation of models during test-time. By considering the pre-trained model and the adapted models as part of statistical neuromanifolds, test-time adaptation was treated as constrained functional regularization using information geometric measures, leading to improved generalization and patient optimality. The efficacy of the proposed approach was shown on three challenging problems: 1) improving generalization of state-of-the-art models for segmenting COVID-19 anomalies in Computed Tomography (CT) images 2) cross-institutional brain tumor segmentation from magnetic resonance (MR) images, 3) segmentation of retinal layers in Optical Coherence Tomography (OCT) images. Further, it was demonstrated that robust patient-specific adaptation can be achieved without adding significant computational burden, making it first of its kind based on information geometric principles.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2553-2567"},"PeriodicalIF":0.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143393107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Score-Based Diffusion Models With Self-Supervised Learning for Accelerated 3D Multi-Contrast Cardiac MR Imaging","authors":"Yuanyuan Liu;Zhuo-Xu Cui;Shucong Qin;Congcong Liu;Hairong Zheng;Haifeng Wang;Yihang Zhou;Dong Liang;Yanjie Zhu","doi":"10.1109/TMI.2025.3534206","DOIUrl":"10.1109/TMI.2025.3534206","url":null,"abstract":"Long scan time significantly hinders the widespread applications of three-dimensional multi-contrast cardiac magnetic resonance (3D-MC-CMR) imaging. This study aims to accelerate 3D-MC-CMR acquisition by a novel method based on score-based diffusion models with self-supervised learning. Specifically, we first establish a mapping between the undersampled k-space measurements and the MR images, utilizing a self-supervised Bayesian reconstruction network. Secondly, we develop a joint score-based diffusion model on 3D-MC-CMR images to capture their inherent distribution. The 3D-MC-CMR images are finally reconstructed using the conditioned Langenvin Markov chain Monte Carlo sampling. This approach enables accurate reconstruction without fully sampled training data. Its performance was tested on the dataset acquired by a 3D joint myocardial <inline-formula> <tex-math>$ text {T}_{{1}}$ </tex-math></inline-formula> and <inline-formula> <tex-math>$ text {T}_{{1}rho }$ </tex-math></inline-formula> mapping sequence. The <inline-formula> <tex-math>$ text {T}_{{1}}$ </tex-math></inline-formula> and <inline-formula> <tex-math>$ text {T}_{{1}rho }$ </tex-math></inline-formula> maps were estimated via a dictionary matching method from the reconstructed images. Experimental results show that the proposed method outperforms traditional compressed sensing and existing self-supervised deep learning MRI reconstruction methods. It also achieves high quality <inline-formula> <tex-math>$ text {T}_{{1}}$ </tex-math></inline-formula> and <inline-formula> <tex-math>$ text {T}_{{1}rho }$ </tex-math></inline-formula> parametric maps close to the reference maps, even at a high acceleration rate of 14.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2436-2448"},"PeriodicalIF":0.0,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143071936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Radiologist-in-the-Loop Self-Training for Generalizable CT Metal Artifact Reduction","authors":"Chenglong Ma;Zilong Li;Yuanlin Li;Jing Han;Junping Zhang;Yi Zhang;Jiannan Liu;Hongming Shan","doi":"10.1109/TMI.2025.3535906","DOIUrl":"10.1109/TMI.2025.3535906","url":null,"abstract":"Metal artifacts in computed tomography (CT) images can significantly degrade image quality and impede accurate diagnosis. Supervised metal artifact reduction (MAR) methods, trained using simulated datasets, often struggle to perform well on real clinical CT images due to a substantial domain gap. Although state-of-the-art semi-supervised methods use pseudo ground-truths generated by a prior network to mitigate this issue, their reliance on a fixed prior limits both the quality and quantity of these pseudo ground-truths, introducing confirmation bias and reducing clinical applicability. To address these limitations, we propose a novel radiologist-in-the-loop self-training framework for MAR, termed RISE-MAR, which can integrate radiologists’ feedback into the semi-supervised learning process, progressively improving the quality and quantity of pseudo ground-truths for enhanced generalization on real clinical CT images. For quality assurance, we introduce a clinical quality assessor model that emulates radiologist evaluations, effectively selecting high-quality pseudo ground-truths for semi-supervised training. For quantity assurance, our self-training framework iteratively generates additional high-quality pseudo ground-truths, expanding the clinical dataset and further improving model generalization. Extensive experimental results on multiple clinical datasets demonstrate the superior generalization performance of our RISE-MAR over state-of-the-art methods, advancing the development of MAR models for practical application. The source code is available at <uri>https://github.com/Masaaki-75/rise-mar</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2504-2514"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143056254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Medical Vision-Language Contrastive Learning via Inter-Matching Relation Modeling","authors":"Mingjian Li;Mingyuan Meng;Michael Fulham;David Dagan Feng;Lei Bi;Jinman Kim","doi":"10.1109/TMI.2025.3534436","DOIUrl":"10.1109/TMI.2025.3534436","url":null,"abstract":"Medical image representations can be learned through medical vision-language contrastive learning (mVLCL) where medical imaging reports are used as weak supervision through image-text alignment. These learned image representations can be transferred to and benefit various downstream medical vision tasks such as disease classification and segmentation. Recent mVLCL methods attempt to align image sub-regions and the report keywords as local-matchings. However, these methods aggregate all local-matchings via simple pooling operations while ignoring the inherent relations between them. These methods therefore fail to reason between local-matchings that are semantically related, e.g., local-matchings that correspond to the disease word and the location word (semantic-relations), and also fail to differentiate such clinically important local-matchings from others that correspond to less meaningful words, e.g., conjunction words (importance-relations). Hence, we propose a mVLCL method that models the inter-matching relations between local-matchings via a relation-enhanced contrastive learning framework (RECLF). In RECLF, we introduce a semantic-relation reasoning module (SRM) and an importance-relation reasoning module (IRM) to enable more fine-grained report supervision for image representation learning. We evaluated our method using six public benchmark datasets on four downstream tasks, including segmentation, zero-shot classification, linear classification, and cross-modal retrieval. Our results demonstrated the superiority of our RECLF over the state-of-the-art mVLCL methods with consistent improvements across single-modal and cross-modal tasks. These results suggest that our RECLF, by modeling the inter-matching relations, can learn improved medical image representations with better generalization capabilities.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2463-2476"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143056255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cardiac Phase Estimation Using Deep Learning Analysis of Pulsed-Mode Projections: Toward Autonomous Cardiac CT Imaging","authors":"P. Wu;E. Haneda;J. D. Pack;I. Heukensfeldt Jansen;A. Hsiao;E. McVeigh;B. De Man","doi":"10.1109/TMI.2025.3536160","DOIUrl":"10.1109/TMI.2025.3536160","url":null,"abstract":"Cardiac CT plays an important role in diagnosing heart diseases but is conventionally limited by its complex workflow that requires dedicated phase and bolus tracking devices [e.g., electrocardiogram (ECG) gating]. This work reports first progress towards robust and autonomous cardiac CT exams through joint deep learning (DL) and analytical analysis of pulsed-mode projections (PMPs). To this end, cardiac phase and its uncertainty were simultaneously estimated using a novel projection domain cardiac phase estimation network (PhaseNet), which utilizes sliding-window multi-channel feature extraction strategy and a long short-term memory (LSTM) block to extract temporal correlation between time-distributed PMPs. An uncertainty-driven Viterbi (UDV) regularizer was developed to refine the DL estimations at each time point through dynamic programming. Stronger regularization was performed at time points where DL estimations have higher uncertainty. The performance of the proposed phase estimation pipeline was evaluated using accurate physics-based emulated data. PhaseNet achieved improved phase estimation accuracy compared to the competing methods in terms of RMSE (~50% improvement vs. standard CNN-LSTM; ~24% improvement vs. multi-channel residual network). The added UDV regularizer resulted in an additional ~14% improvement in RMSE, achieving accurate phase estimation with <6% RMSE in cardiac phase (phase ranges from 0-100%). To our knowledge, this is the first publication of prospective cardiac phase estimation in the projection domain. Combined with our previous work on PMP-based bolus curve estimation, the proposed method could potentially be used to achieve autonomous cardiac scanning without ECG device and expert-in-the-loop bolus timing.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 6","pages":"2424-2435"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143056262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}