{"title":"An Enhanced Double-Unet Segmentation Framework for the Early Detection of Diabetic Retinopathy","authors":"Chen Minghui, Xu Shiyi, Zhou Jing, Wang Hongwang, Shao Yi","doi":"10.1002/ima.70096","DOIUrl":"https://doi.org/10.1002/ima.70096","url":null,"abstract":"<div>\u0000 \u0000 <p>A segmentation method for diabetic retinopathy images utilizing an enhanced Double-Unet can assist clinicians in analyzing the length, curvature, and branching angles of blood vessels, among other parameters, to facilitate the early detection of diabetic retinopathy. Thin blood veins have little contrast, which makes it simple to lose spatial information. By adding the attention gates of fusion channel attention and spatial attention in the decoder section, the enhanced Double-Unet network highlights the vascular features and recovers the features lost during the coding stage. Additionally, multi-scale context information may be efficiently extracted, and blood vessel feature information can be enhanced by substituting the Dense Atrous Spatial Pyramid Pooling (Dense ASPP) module for the ASPP module. The proposed method was assessed using retinal vascular datasets (DRIVE, CHASEDB1, STARE, HRF) and fundus images from 40 patients at a leading hospital in Fujian Province of China. The results show that the present method is fast and has high accuracy and can achieve high accuracy, recall, and <i>F</i>1 scores on most of the above datasets with fewer parameters. The segmentation results offer a solid foundation for more vascular reconstruction research while also successfully overcoming the interference of thin vessels and low-contrast textures.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143888922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FPUNet: A Multi-Level Residual Fractional Domain Transformer Network for Ischemic Stroke Image Segmentation","authors":"Zhongxia Tan, Chen Huang, Hongqing Zhu, Cuiling Jiang, Yongjing Wan, Bingcang Huang","doi":"10.1002/ima.70095","DOIUrl":"https://doi.org/10.1002/ima.70095","url":null,"abstract":"<div>\u0000 \u0000 <p>Due to the fact that ischemic stroke patients comprise 60%–70% of all stroke cases, coupled with the long examination time and narrow treatment window, along with the high requirement for clinicians' experience, an accurate and rapid ischemic stroke lesion segmentation algorithm can provide clinicians with valuable assistance in the diagnosis and treatment of stroke patients, which is of great clinical significance. This paper proposes a Fractional Perspective U-Net (FPUNet), which offers a novel perspective for observing lesion features between the spatial and frequency domains, allowing for a more prominent examination of these features. Traditional spatial or frequency domain analysis restricts the observation of signals to two separate angles, making it difficult to simultaneously analyze from both perspectives; this can lead to the oversight of important signal characteristics. In contrast, the fractional domain offers a balance between time and frequency, facilitating the analysis of signals across different scales. This multi-scale perspective enables the capture of details that may be overlooked in pure time or frequency domains. It allows for a more effective extraction of details and texture information from medical images, thereby accurately delineating the edges of stroke regions and providing clearer boundaries for pathological areas, improving the separation of lesions from the background. FPUNet is designed with a multi-level residual structure incorporating a multi-head attention mechanism based on the fractional domain, alongside a variant of convolutional neural network whose layers are tailored to the number of feature map channels for effective channel feature extraction. This innovative approach aims to address the challenges posed by the intricate nature of stroke, ultimately assisting clinicians in the diagnosis and treatment of stroke patients. The proposed method demonstrates superior performance over state-of-the-art models in both accuracy and segmentation efficacy, achieving Dice coefficients of 64.36%, 63.02%, and 86.11% on the AISD, ATLASv2.0, and ISLES22 datasets, respectively.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143879776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Efficient Parallel Branch Network for Multi-Class Classification of Prostate Cancer From Histopathological Images","authors":"Vishal Srivastava, Akshaya Prabhu, Sravya Nedungatt, K. Vibha Damodara, Shyam Lal, Jyoti Kini","doi":"10.1002/ima.70092","DOIUrl":"https://doi.org/10.1002/ima.70092","url":null,"abstract":"<div>\u0000 \u0000 <p>Prostate cancer is one of the prevalent forms of cancer, posing a significant health concern for men. Accurate detection and classification of prostate cancer are crucial for effective diagnosis and treatment planning. Histopathological images play a pivotal role in identifying prostate cancer by enabling pathologists to identify cellular abnormalities and tumor characteristics. With the rapid advancements in deep learning, Convolutional Neural Networks (CNNs) have emerged as a powerful tool for tackling complex computer vision tasks, including object detection, classification, and segmentation. This paper proposes a Parallel Branch Network (PBN), a CNN architecture specifically designed for the automatic classification of prostate cancer into its subtypes from histopathological images. The paper introduces a novel Efficient Residual (ER) block that enhances feature representation using residual learning and multi-scale feature extraction. By utilizing multiple branches with different filter reduction ratios and dense attention mechanisms, the block captures diverse features while preserving essential information. The proposed PBN model achieved a classification accuracy of 93.16% on the Prostate Gleason dataset, outperforming all other comparison models.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143879777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Structure-Preserving Denoising Diffusion Model for AV45 PET Quantification Without MRI in Alzheimer's Disease Diagnosis","authors":"Xutao Guo, Chenfei Ye, Mingkai Zhang, Xingyu Hao, Yanwu Yang, Yue Yu, Ting Ma, Ying Han","doi":"10.1002/ima.70074","DOIUrl":"https://doi.org/10.1002/ima.70074","url":null,"abstract":"<div>\u0000 \u0000 <p>Lack of early diagnosis often results in patients with Alzheimer progressing to irreversible mild to moderate cognitive impairment without timely treatment. The deposition of amyloid-<i>β</i> (A<i>β</i>) in the cerebral cortex, a definitive biomarker for Alzheimer's disease, is detectable through AV45 PET scans, facilitating early diagnosis of the condition. Clinically, accurate quantification of AV45 PET scans necessitate T1 images. However, the prevalent use of PET-CT over PET-MRI equipment entails additional MRI scans, leading to increased costs and patient burden. To address this clinical challenge, this paper proposes the structure-preserving denoising diffusion probabilistic model (SP-DDPM), capable of synthesizing the T1 images from AV45 PET scans. In the SP-DDPM, structural details from T1 images are incorporated into the diffusion model to emphasize anatomical accuracy. We also enhance the model's learning for the targeted brain areas using segmentation-based priors. Moreover, an exponential cosine noise strategy is proposed to improve the model's suitability for generating T1 images. In this study, we incorporated a large-scale cohort of 667 subjects from the ADNI and SILCODE databases to train and validate our models. The MR images generated from AV45 PET demonstrated similar signal patterns to real MR images. The average absolute error of the cortical composite region SUVR, estimated using our method, was 0.018 for the ADNI dataset and 0.041 for the SILCODE dataset, outperforming current techniques. The MR images generated by the SP-DDPM serve as an accurate template for amyloid quantification, facilitating precise AV45 PET scan quantification in the absence of real MR images. The application of this method is poised to streamline the diagnostic workflow for Alzheimer's disease, increase clinical work efficiency, and alleviate patient burden.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143861956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Joint Lightweight U-Shaped Network for Efficient Medical Image Segmentation of Melanoma and Breast Cancer","authors":"Ting Ma, Jilong Liao, Feng Hu, Maode Ma, Ke Wang","doi":"10.1002/ima.70087","DOIUrl":"https://doi.org/10.1002/ima.70087","url":null,"abstract":"<div>\u0000 \u0000 <p>With the continuous development of deep learning, U-Net networks, as an encoder-decoder U-shaped network architecture based on skip connections, have become a popular structure for various medical image segmentation applications in recent years. However, traditional medical segmentation networks face severe challenges when dealing with complex scenarios such as dermoscopy images of melanoma and breast ultrasound images. These challenges primarily stem from limitations in semantic understanding and the complexity of lesion morphology, leading to difficulties in accurately identifying and segmenting lesion structures with irregular shapes and blurred boundaries with surrounding tissues. Additionally, the prevalent issues of parameter redundancy and computational inefficiency in network structures further constrain their potential applications in clinical practice. To address these issues, this paper proposes an image segmentation network based on dynamic skip connections and convolutional multilayer perceptrons—the Joint Lightweight U-shaped Network. JLU-Net, founded on the concept of “joint,” incorporates a joint non-uniform downsampling module that combines linear pooling with nonlinear convolutional downsampling to achieve lightweight modeling. Furthermore, to resolve the semantic gap problem, JLU-Net adopts an enhanced kernel convolution module, which strengthens target region features through feature recalibration operations while integrating detailed and global information. It also includes a joint squeeze attention module, which processes wide and narrow, global and local features simultaneously through squeeze axial operations, thereby enhancing global information exchange. Extensive experiments demonstrate that our JLU-Net achieves state-of-the-art performance across various environments while requiring only 0.29M parameters and 0.52 GFLOPs.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143846032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Generative Adversarial Network Based on Conditional VAE for EIT Reconstruction of Lung Diseases","authors":"Yueyue Xiao, Jie Yu, Chunxiao Chen, Liang Wang, Songpei Hu, Bokai Chen, Hao Yu","doi":"10.1002/ima.70089","DOIUrl":"https://doi.org/10.1002/ima.70089","url":null,"abstract":"<div>\u0000 \u0000 <p>Electrical impedance tomography (EIT), a non-invasive, real-time, and cost-effective imaging technique, is widely studied in medical diagnostics for lung diseases. However, the severely ill-posed nonlinear inverse problem in EIT leads to reconstructed images being susceptible to noise-induced artifacts. This study aims to advance a deep learning technique to reconstruct high-resolution conductivity distributions using voltages measured by EIT sensors. We proposed a novel reconstruction algorithm called generative adversarial network based on conditional variational autoencoder (CVAE-GAN). We incorporated the true conductivity as a conditional variable into the latent representation of the variational autoencoder (VAE) decoder and encoder to form a conditional variational autoencoder (CVAE). A residual module was introduced into the CVAE decoder and encoder to facilitate the network in learning deeper feature representations, which improves the performance of the model. The adversarial learning strategy leverages the improved CVAE as the generator in a GAN framework, substantially enhancing the accuracy and robustness of the reconstructed images. Experimental results demonstrate that CVAE-GAN outperforms five state-of-the-art deep learning methods. Compared to the best alternative model, it achieves an 8.9% improvement in peak signal-to-noise ratio (PSNR) and a 3.2% improvement in structural similarity index (SSIM), while reducing mean squared error (MSE) by 33.33% and relative error (RE) by 24.57%. These results highlight the significant performance gains in terms of both accuracy and robustness for EIT image reconstruction. The proposed CVAE-GAN framework represents a significant advancement in EIT image reconstruction. By addressing key challenges such as noise-induced artifacts and achieving robust reconstructions, it provides a generalizable approach with transformative potential for real-world applications in medical imaging, particularly in the diagnostics and monitoring of lung diseases.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143846031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MGMFormer: Multi-Scale Attentional Medical Image Segmentation Network for Semantic Feature Enhancement","authors":"Yuanbin Wang, Yunbo Shi, Rui Zhao, Yunan Chen, Xingqiao Ren, Binghong Xing","doi":"10.1002/ima.70086","DOIUrl":"https://doi.org/10.1002/ima.70086","url":null,"abstract":"<div>\u0000 \u0000 <p>Multi-scale feature extraction is important for the accurate segmentation of different lesion areas. In order to solve the problem of false cut and missing cut in practical applications due to the difficulty of extracting semantic feature information from existing technologies, we proposed a multi-scale attention network framework based on semantic feature enhancement, MGMFormer. Taking advantage of multi-scale feature extraction and attention mechanism to enhance semantic features, the encoder and decoder are composed of joint learning, multi-scale arbitrary sampling, and global adaptive calibration modules. It makes the encoder more focused on the fine structure, so as to effectively deal with the problem of reduced accuracy caused by modal heterogeneity. At the same time, it solves the problem of lack of feature expression ability when the decoder deals with complex texture information. We evaluated the segmentation performance of MGMFormer on eight different datasets, BraTS, Sypanse, ACDC, ISIC, Kvasir-SEG, CAMUS, CHNCXR, and Glas, and in particular, it outperformed most existing algorithms.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143846186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ensemble Architecture of Vision Transformer and CNNs for Breast Cancer Tumor Detection From Mammograms","authors":"Saif Ur Rehman Khan, Sohaib Asif, Omair Bilal","doi":"10.1002/ima.70090","DOIUrl":"https://doi.org/10.1002/ima.70090","url":null,"abstract":"<div>\u0000 \u0000 <p>Addressing the complexities of classifying distinct object classes in computer vision presents several challenges, including effectively capturing features such as color, form, and tissue size for each class, correlating class vulnerabilities, singly capturing features, and predicting class labels accurately. To tackle these issues, we introduce a novel hybrid deep dense learning technique that combines deep transfer learning with a transformer architecture. Our approach utilizes ResNet50, EfficientNetB1, and our proposed ProDense block as the backbone models. By integrating the Vit-L16 transformer, we can focus on relevant features in mammography and extract high-value pair features, offering two alternative methods for feature extraction. This allows our model to adaptively shift the region of interest towards the class type in slides. The transformer architecture, particularly Vit-L16, enhances feature extraction by efficiently capturing long-range dependencies in the data, enabling the model to better understand the context and relationships between features. This aids in more accurate classification, especially when fine-tuning pretrained models, as it helps the model adapt to specific characteristics of the target dataset while retaining valuable information learned from the pretraining phase. Furthermore, we employ a stack ensemble technique to leverage both the deep transfer learning model and the ProDense block extension for training extensive features for breast cancer classification. The fine-tuning process employed by our hybrid model helps refine the dense layers, enhancing classification accuracy. Evaluating our method on the INbreast dataset, we observe a significant improvement in predicting the binary cancer category, outperforming the current state-of-the-art classifier by 98.08% in terms of accuracy.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143846185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Federated Transfer Learning for Lung Disease Detection","authors":"Shrey Sumariya, Shreyas Rami, Shubham Revadekar, Chetashri Bhadane","doi":"10.1002/ima.70080","DOIUrl":"https://doi.org/10.1002/ima.70080","url":null,"abstract":"<div>\u0000 \u0000 <p>Detecting lung disease traditionally relied on the expertise of doctors and medical practitioners. However, advancements in Artificial Intelligence have revolutionized this process by utilizing machine learning and deep learning algorithms to analyze X-ray and CT scan data. Despite the potential of these technologies, the use of private patient data for training models poses significant privacy concerns, as hospitals are reluctant to share such sensitive information. To address this issue, this paper presents a decentralized approach using Federated Learning, which secures patient data while overcoming the limitations of centralized data collection and storage. We propose a Federated Transfer Learning system that allows for effective model training without centralizing sensitive data. This approach leverages the decentralized nature of federated learning and the efficiency of transfer learning, enabling us to train models with limited data from each hospital while minimizing computing costs. We evaluated four methodologies—centralized, federated, transfer learning, and federated transfer learning—to determine their effectiveness in classifying lung diseases. Our findings demonstrate that Federated Transfer Learning is the most effective method, as it preserves user privacy by training models directly on client devices and achieves high accuracy. Specifically, the ResNet-50 model yielded the highest performance, with accuracies of 87.95%, 88.04%, 87.55%, and 89.96% for the centralized, transfer, federated, and federated transfer learning approaches, respectively. This study underscores the potential of Federated Transfer Learning to enhance both the accuracy of disease classification and the protection of patient privacy in medical applications.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143840810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diff-CFFBNet: Diffusion-Embedded Cross-Layer Feature Fusion Bridge Network for Brain Tumor Segmentation","authors":"Xiaosheng Wu, Qingyi Hou, Chaosheng Tang, Shuihua Wang, Junding Sun, Yudong Zhang","doi":"10.1002/ima.70088","DOIUrl":"https://doi.org/10.1002/ima.70088","url":null,"abstract":"<div>\u0000 \u0000 <p>This study introduces the Diff-CFFBNet, a novel network for brain tumor segmentation designed to address the challenges of misdetection in broken tumor regions within MRI scans, which is crucial for early diagnosis, treatment planning, and disease monitoring. The proposed method incorporates a cross-layer feature fusion bridge (CFFB) to enhance feature interaction and a cross-layer feature fusion U-Net (CFFU-Net) to reduce the semantic gap in diffusion models. Additionally, a sampling-quantity-based fusion (SQ-Fusion) is utilized to leverage the uncertainty of diffusion models for improved segmentation outcomes. Experimental validation on BraTS 2019, BraTS 2020, TCGA-GBM, TCGA-LGG, and MSD datasets demonstrates that Diff-CFFBNet outperforms existing methods, achieving superior performance in terms of Dice score, HD95, and mIoU metrics. These results indicate the model's robustness and precision, even under challenging conditions with complex tumor structures. Diff-CFFBNet provides a reliable solution for accurate and efficient brain tumor segmentation in medical imaging, with the potential for clinical application in treatment planning and disease monitoring. Future work aims to extend this approach to multiple tumor types and refine diffusion model applications in medical image segmentation.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143830995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}