{"title":"CMT-FFNet: A CMT-based feature-fusion network for predicting TACE treatment response in hepatocellular carcinoma","authors":"Sen Wang , Ying Zhao , Xiuding Cai , Nan Wang , Qinhe Zhang , Siyi Qi , Ziyao Yu , Ailian Liu , Yu Yao","doi":"10.1016/j.compmedimag.2025.102577","DOIUrl":"10.1016/j.compmedimag.2025.102577","url":null,"abstract":"<div><div>Accurately and preoperatively predicting tumor response to transarterial chemoembolization (TACE) treatment is crucial for individualized treatment decision-making hepatocellular carcinoma (HCC). In this study, we propose a novel feature fusion network based on the Convolutional Neural Networks Meet Vision Transformers (CMT) architecture, termed CMT-FFNet, to predict TACE efficacy using preoperative multiphase Magnetic Resonance Imaging (MRI) scans. The CMT-FFNet combines local feature extraction with global dependency modeling through attention mechanisms, enabling the extraction of complementary information from multiphase MRI data. Additionally, we introduce an orthogonality loss to optimize the fusion of imaging and clinical features, further enhancing the complementarity of cross-modal features. Moreover, visualization techniques were employed to highlight key regions contributing to model decisions. Extensive experiments were conducted to evaluate the effectiveness of the proposed modules and network architecture. Experimental results demonstrate that our model effectively captures latent correlations among features extracted from multiphase MRI data and multimodal inputs, significantly improving the prediction performance of TACE treatment response in HCC patients.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102577"},"PeriodicalIF":5.4,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144535589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"D2C-Morph: Brain regional segmentation based on unsupervised registration network with similarity analysis","authors":"Seunghyeon Han, Yoonguu Song, Boreom Lee","doi":"10.1016/j.compmedimag.2025.102589","DOIUrl":"10.1016/j.compmedimag.2025.102589","url":null,"abstract":"<div><div>Brain regional segmentation is an image-processing approach widely used in brain image analyses. Deep learning models that perform segmentation alone play an important role in medical fields such as automatic diagnosis and prognosis prediction. This method is effective for rapid diagnosis and large-scale processing. However, spatial alignment between image data is required for accurate segmentation. We proposed D2C-Morph, which can jointly perform registration and segmentation through unsupervised learning. The proposed model emphasizes the features of each input through a dual-path network and is designed to use contrastive learning twice. In addition, we demonstrated that the performance of the decoder can be improved by using a correlation feature map that enhances the similarity of the feature maps between two inputs through a correlation layer. Our study demonstrates that the deformation field of the registration network can be utilized for segmentation to jointly perform image processing pipelines.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102589"},"PeriodicalIF":5.4,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144522079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-supervised suppression of MRI cardiac device artifacts based on multi-instance contrastive learning and anisotropic spatiotemporal transformer","authors":"Zhuo Chen , Yixin Emu , Haiyang Chen , Zhihao Xue , Juan Gao , Fan Yang , Chenhao Gao , Xin Tang , Junpu Hu , Chenxi Hu","doi":"10.1016/j.compmedimag.2025.102598","DOIUrl":"10.1016/j.compmedimag.2025.102598","url":null,"abstract":"<div><div>Cardiovascular implantable electronic devices (CIEDs) induce severe off-resonance artifacts in balanced steady-state free precession (bSSFP) cine MRI, limiting diagnostic utility for a growing patient population. While supervised and unpaired learning methods have shown promise for artifact suppression, their reliance on paired ground truth or artifact-free domains renders them clinically impractical for CIED imaging. To address this, we propose a self-supervised framework that integrates Noise2Noise, physics-driven multi-instance contrastive learning, and an anisotropic spatiotemporal transformer to eliminate the need for clean data. Central to our approach is the exploitation of bSSFP phase cycling’s linear combination property: multiple artifact-corrupted acquisitions with incremental RF phase shifts are leveraged as anatomically consistent \"pseudo-pairs.\" A novel multi-instance contrastive loss enforces consistency between artifact-suppressed outputs of these pairs, compensating for the finite-sample bias and spatially correlated artifacts that violate conventional Noise2Noise assumptions. Further, an anisotropic spatiotemporal transformer hierarchically models long-range dependencies using anisotropic spatial and spatiotemporal attention windows with a better alignment with cardiac anatomy, preserving myocardial texture and dynamic motion. Experiments on simulated and real CIED datasets demonstrate an improved performance relative to alternative methods. This work bridges the gap between idealized statistical learning and MRI physics, providing a feasible solution in real-world cardiac cine imaging when ground truth is inaccessible.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102598"},"PeriodicalIF":5.4,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144535587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FeaCL: Carotid plaque classification from ultrasound images using feature-level and instance-level contrast learning","authors":"Cheng Li , Kai Wang , Haitao Gan , Ran Zhou , Xinyao Cheng , Zhi Yang","doi":"10.1016/j.compmedimag.2025.102590","DOIUrl":"10.1016/j.compmedimag.2025.102590","url":null,"abstract":"<div><div>The classification of carotid plaques from ultrasound images in clinical application is crucial for predicting patient risks of cardiovascular and cerebrovascular diseases, as well as for developing appropriate treatment strategies. Although the effectiveness of deep learning in this domain is well-established, its performance is often limited by the scarcity of labeled carotid plaque images. To address label scarcity, we present a novel self-supervised learning technique known as FEature-level and instAnce-level contrast learning (FeaCL) to enhance carotid plaque classification. FeaCL first utilizes a triplet network in the pretext task where the strong- and weak-augmentation approach is employed. The triplet network promotes the similarity of the three different views from both feature and instance perspectives to learn effective representation of carotid plaques. Then in the downstream task, the encoder network is initialized by the network trained in the pretext task, and updated using labeled ultrasound images. Experimental results on an ultrasound image dataset show that FeaCL achieved a classification accuracy of 83.4% with 30% of the training data, marking an improvement of 16.3% compared to the network without the pretext task. It is indicated that FeaCL can help clinicians diagnose the type of carotid plaque and evaluate the risk of the disease. The source code is available at: <span><span>https://github.com/a610lab/FeaCL</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102590"},"PeriodicalIF":5.4,"publicationDate":"2025-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144511076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multimodal skin lesion classification through cross-attention fusion and collaborative edge computing","authors":"Nhu-Y Tran-Van, Kim-Hung Le","doi":"10.1016/j.compmedimag.2025.102588","DOIUrl":"10.1016/j.compmedimag.2025.102588","url":null,"abstract":"<div><div>Skin cancer is a significant global health concern requiring early and accurate diagnosis to improve patient outcomes. While deep learning-based computer-aided diagnosis (CAD) systems have emerged as effective diagnostic support tools, they often face three key limitations: low diagnostic accuracy due to reliance on single-modality data (e.g., dermoscopic images), high network latency in cloud deployments, and privacy risks from transmitting sensitive medical data to centralized servers. To overcome these limitations, we propose a unified solution that integrates a multimodal deep learning model with a collaborative inference scheme for skin lesion classification. Our model enhances diagnostic accuracy by fusing dermoscopic images with patient metadata via a novel cross-attention-based feature fusion mechanism. Meanwhile, the collaborative scheme distributes computational tasks across IoT and edge devices, reducing latency and enhancing data privacy by processing sensitive information locally. Our experiments on multiple benchmark datasets demonstrate the effectiveness of this approach and its generalizability, such as achieving a classification accuracy of 95.73% on the HAM10000 dataset, outperforming competitors. Furthermore, the collaborative inference scheme significantly improves efficiency, achieving latency speedups of up to 20% and 47% over device-only and edge-only schemes.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102588"},"PeriodicalIF":5.4,"publicationDate":"2025-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144470847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D cardiac shape analysis with variational point cloud autoencoders for myocardial infarction prediction and virtual heart synthesis","authors":"Marcel Beetz , Abhirup Banerjee , Lei Li , Julia Camps , Blanca Rodriguez , Vicente Grau","doi":"10.1016/j.compmedimag.2025.102587","DOIUrl":"10.1016/j.compmedimag.2025.102587","url":null,"abstract":"<div><div>Cardiac anatomy and physiology vary considerably across the human population. Understanding and taking into account this variability is crucial for both accurate clinical decision-making and realistic <em>in silico</em> modeling of cardiac function. In this work, we propose multi-class variational point cloud autoencoders (Point VAE) as a novel geometric deep learning approach for 3D cardiac shape and function analysis. Its encoder–decoder architecture enables efficient multi-scale feature learning directly on high resolution point cloud representations of the multi-class 3D cardiac anatomy and can capture complex non-linear 3D shape variability in a low-dimensional and interpretable latent space. We first evaluate the Point VAE’s reconstruction ability on a dataset of over 10,000 subjects and find mean Chamfer distances between input and reconstructed point clouds below the pixel resolution of the underlying image acquisitions. Furthermore, we analyze the Point VAE’s latent space and observe a realistic and disentangled representation of morphological and functional variability. We test the latent space for pathology prediction and find it to outperform clinical benchmarks by 13% and 16% in area under the receiver operating characteristic (AUROC) curves for the tasks of prevalent myocardial infarction (MI) detection and incident MI prediction, respectively, and by 10% in terms of Harrell’s concordance index for MI survival analysis. Finally, we use the generated populations for <em>in silico</em> simulations of cardiac electrophysiology, demonstrating its ability to introduce realistic natural variability.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102587"},"PeriodicalIF":5.4,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144510978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LoRA-Enhanced RT-DETR: First Low-Rank Adaptation based DETR for real-time full body anatomical structures identification in musculoskeletal ultrasound","authors":"Jyun-Ping Kao , Yu-Ching Chung , Hao-Yu Hung , Chun-Ping Chen , Wen-Shiang Chen","doi":"10.1016/j.compmedimag.2025.102583","DOIUrl":"10.1016/j.compmedimag.2025.102583","url":null,"abstract":"<div><div>Medical imaging models for object identification often rely on extensive pretraining data, which is difficult to obtain due to data scarcity and privacy constraints. In practice, hospitals typically have access only to pretrained model weights without the original training data limiting their ability to tailor models to specific patient populations and imaging devices. We address this challenge with the first Low-Rank Adaptation (LoRA)-enhanced Real-Time Detection Transformer (RT-DETR) model for full body musculoskeletal (MSK) ultrasound (US). By injecting LoRA modules into select encoder and decoder layers of RT-DETR, we achieved a 99.45 % (RT-DETR-L) and 99.68 % (RT-DETR-X) reduction in trainable parameters while preserving the model’s representational power. This extreme reduction enables efficient fine-tuning using only minimal institution-specific data and maintains robust performance even on anatomical structures absent from the fine-tuning set. In extensive 5-fold cross-validation, our LoRA-enhanced model outperformed traditional full-model fine-tuning and maintained or improved detection accuracy across a wide range of MSK structures while demonstrating strong resilience to domain shifts. The proposed LoRA-enhanced RT-DETR significantly lowers the barrier for deploying transformer-based detection in clinics, offering a privacy-conscious, computationally lightweight solution for real-time, full-body MSK US identification.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102583"},"PeriodicalIF":5.4,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144470848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual Stream Feature Fusion 3D Network for supraspinatus tendon tear classification","authors":"Sheng Miao , Dezhen Wang , Xiaonan Yang , Zitong Liu , Xiang Shen , Dapeng Hao , Chuanli Zhou , Jiufa Cui","doi":"10.1016/j.compmedimag.2025.102580","DOIUrl":"10.1016/j.compmedimag.2025.102580","url":null,"abstract":"<div><div>The classification of medical images is of significant importance for computer-aided diagnosis. Supraspinatus tendon tear is a common clinical condition. Classifying the severity of supraspinatus tendon tears accurately aids in the selection of surgical techniques and postoperative rehabilitation. While some studies have classified supraspinatus tendon tears, existing methods lack detailed classification. Inaccurate and insufficiently detailed classification can lead to errors in the selection of surgical techniques, thereby affecting patient treatment and rehabilitation. In addition, the computational complexity of traditional 3D classification models is too high. In this study, we conducted a detailed 6-class classification of the supraspinatus tendon tears for the first time. We propose a novel 3D model for classifying supraspinatus tendon tears, the Dual Stream Feature Fusion 3D Network (DSFF-3DNet). To accelerate the extraction of the Region of Interest (ROI), we trained the Yolov9 model to identify the supraspinatus tendon and save the Yolo label. DSFF-3DNet comprises three stages: feature extraction, feature enhancement, and classification. We performed data augmentation, training, validation and internal testing on a dataset with 1014 patients, and tested on two independent external test sets. DSFF-3DNet achieved AUCs of 97.88, 88.06, and 84.47 on the internal test set and the two external test sets, respectively, surpassing the best-performing traditional models on these three test sets by 3.51%, 9.25%, and 9.38% across these test sets. Ablation experiments demonstrated the individual contributions of each module in DSFF-3DNet, and significance difference tests showed that the performance improvements were statistically significant (p<span><math><mo><</mo></math></span>0.05).</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102580"},"PeriodicalIF":5.4,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144338888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Radiogenomic insights suggest that multiscale tumor heterogeneity is associated with interpretable radiomic features and outcomes in cancer patients","authors":"Peng Lin , Jin-mei Zheng , Chang-wen Liu , Quan-quan Tang , Jin-shu Pang , Qiong Qin , Zhen-hu Lin , Hong Yang","doi":"10.1016/j.compmedimag.2025.102586","DOIUrl":"10.1016/j.compmedimag.2025.102586","url":null,"abstract":"<div><h3>Background:</h3><div>To develop radiogenomic subtypes and determine the relationships between radiomic phenotypes and multiomics molecular characteristics.</div></div><div><h3>Materials and Methods:</h3><div>In this retrospective multicohort analysis, we divided patients into different subgroups based on multiomics features. This unsupervised subtyping process was performed by integrating 10 unsupervised machine learning algorithms. We compared the variations in clinicopathological, radiomic, genomic, and transcriptomic features across different subgroups. Based on the key radiomic features of subtypes, overall survival (OS) prediction models were developed and validated by using 10 supervised machine learning algorithms. Model performance was evaluated by using the C-index and log-rank test.</div></div><div><h3>Results:</h3><div>This study included 2,281 patients (mean age, 63 years ±13 [SD]; 660 females, 1,621 males) for analysis. Patients were divided into four subgroups on the basis of radiogenomic data. Significant differences in OS were observed among the subgroups. Subtypes were significantly different when radiomic phenotypes, gene mutation status and transcriptomic pathway alterations were considered. Among the 24 radiomic features important for subtyping, 9 were closely associated with OS. Machine learning algorithms were used to develop prognostic models and showed moderate OS prediction performance in the training (log-rank <span><math><mrow><mi>P</mi><mo><</mo><mn>0</mn><mo>.</mo><mn>001</mn></mrow></math></span>) and test (log-rank <span><math><mrow><mi>P</mi><mo><</mo><mn>0</mn><mo>.</mo><mn>001</mn></mrow></math></span>) cohorts. Tumor molecular heterogeneity is also closely related to the radiomic phenotype.</div></div><div><h3>Conclusions:</h3><div>Biologically interpretable radiomic features provide an effective and novel algorithm for tumor molecular capture and risk stratification.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102586"},"PeriodicalIF":5.4,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144331331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MDEANet: A multi-scale deep enhanced attention net for popliteal fossa segmentation in ultrasound images","authors":"Fangfang Chen , Wei Fang , Qinghua Wu , Miao Zhou , Wenhui Guo , Liangqing Lin , Zhanheng Chen , Zui Zou","doi":"10.1016/j.compmedimag.2025.102570","DOIUrl":"10.1016/j.compmedimag.2025.102570","url":null,"abstract":"<div><div>Popliteal sciatic nerve block is a widely used technique for lower limb anesthesia. However, despite ultrasound guidance, the complex anatomical structures of the popliteal fossa can present challenges, potentially leading to complications. To accurately identify the bifurcation of the sciatic nerve for nerve blockade, we propose MDEANet, a deep learning-based segmentation network designed for the precise localization of nerves, muscles, and arteries in ultrasound images of the popliteal region. MDEANet incorporates Cascaded Multi-scale Atrous Convolutions (CMAC) to enhance multi-scale feature extraction, Enhanced Spatial Attention Mechanism (ESAM) to focus on key anatomical regions, and Cross-level Feature Fusion (CLFF) to improve contextual representation. This integration markedly improves segmentation of nerves, muscles, and arteries. Experimental results demonstrate that MDEANet achieves an average Intersection over Union (IoU) of 88.60% and a Dice coefficient of 93.95% across all target structures, outperforming state-of-the-art models by 1.68% in IoU and 1.66% in Dice coefficient. Specifically, for nerve segmentation, the Dice coefficient reaches 93.31%, underscoring the effectiveness of our approach. MDEANet has the potential to provide decision-support assistance for anesthesiologists, thereby enhancing the accuracy and efficiency of ultrasound-guided nerve blockade procedures.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102570"},"PeriodicalIF":5.4,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144331422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}