{"title":"Pancreatic Tumor Detection From CT Images Converted to Graphs Using Whale Optimization and Classification Algorithms With Transfer Learning","authors":"Yusuf Alaca, Ömer Faruk Akmeşe","doi":"10.1002/ima.70040","DOIUrl":"https://doi.org/10.1002/ima.70040","url":null,"abstract":"<p>Pancreatic cancer is one of the most aggressive types of cancer, known for its high mortality rate, as it is often diagnosed at an advanced stage. Early diagnosis holds the potential to prolong patients' lifespans and improve treatment success rates. In this study, an innovative method is proposed to enhance the diagnosis of pancreatic cancer. Computed tomography (CT) images were converted into graphs using the Harris Corner Detection Algorithm and analyzed using deep learning models via transfer learning. DenseNet121 and InceptionV3 transfer learning models were trained on graph-based data, and model parameters were optimized using the Whale Optimization Algorithm (WOA). Additionally, classification algorithms such as k-Nearest Neighbors (k-NN), Support Vector Machines (SVM), and Random Forests (RF) were integrated into the analysis of the extracted features. The best results were achieved using the k-NN classification algorithm on features optimized by WOA, yielding an accuracy of 92.10% and an F1 score of 92.74%. The study demonstrated that graph-based transformation enabled more effective modeling of spatial relationships, thereby enhancing the performance of deep learning models. WOA offered significant superiority compared to other methods in parameter optimization. This study aims to contribute to the development of a reliable diagnostic system that can be integrated into clinical applications. In the future, the use of larger and more diverse datasets, along with different graph-based methods, could enhance the generalizability and performance of the proposed approach. The proposed model has the potential to serve as a decision support tool for physicians, particularly in early diagnosis, offering an opportunity to improve patients' quality of life.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70040","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143362798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification of Hepatic Nodules Using an Improved WOA-SVM Radiomics Model","authors":"Haoyun Sun, Lijia Wang","doi":"10.1002/ima.70036","DOIUrl":"https://doi.org/10.1002/ima.70036","url":null,"abstract":"<div>\u0000 \u0000 <p>The incidence and mortality of liver cancer in China are not optimistic. Early diagnosis and treatment have become the urgent means to solve this situation. To develop an improved radiomics model for the classification of hepatic nodules based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The DCE-MRI images of 30 hepatitis, 30 cirrhotic nodules (CN), 30 dysplastic nodules (DN), and 30 hepatocellular carcinoma (HCC) patients were retrospectively and randomly divided into training and testing datasets in a 7:3 ratio. Firstly, the radiomics features of lesions were extracted by using feature extractor module based on Pyradiomics, from which optimal features were selected by least absolute shrinkage and selection operator (LASSO). Then, the improved whale optimization algorithm (WOA) with Tent mapping, Adaptive weight, and Levy flight (TALWOA) was used for parameter optimization of support vector machines (SVM). Finally, TALWOA-SVM was employed for the four-class classification of hepatic nodules. Receiver operating characteristic (ROC) curves, area under curve (AUC), and F1-score were used to evaluate the performance of the TALWOA-SVM model. Forty-four most informative features were selected from 851 features to train the SVM classifier. Compared with the standard whale algorithm and other optimization algorithms, the optimized model proposed in this paper has highest classification accuracy (81.315%), the ROC of each category being closer to the top left corner with AUC were 0.9378 (95% CI: 0.893–0.981), 0.9223 (95% CI: 0.873–0.971), 0.9794 (0.958–1.000), 0.9872 (0.971–1.000). The model proposed in this study can better classify hepatic nodules in different periods, and is expected to provide help for the early diagnosis of liver cancer.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143362799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SCABNet: A Novel Polyp Segmentation Network With Spatial-Gradient Attention and Channel Prioritization","authors":"Khaled ELKarazle, Valliappan Raman, Caslon Chua, Patrick Then","doi":"10.1002/ima.70039","DOIUrl":"https://doi.org/10.1002/ima.70039","url":null,"abstract":"<p>Current colorectal polyps detection methods often struggle with efficiency and boundary precision, especially when dealing with polyps of complex shapes and sizes. Traditional techniques may fail to precisely define the boundaries of these polyps, leading to suboptimal detection rates. Furthermore, flat and small polyps often blend into the background due to their low contrast against the mucosal wall, making them even more challenging to detect. To address these challenges, we introduce SCABNet, a novel deep learning architecture for the efficient detection of colorectal polyps. SCABNet employs an encoder-decoder structure with three novel blocks: the Feature Enhancement Block (FEB), the Channel Prioritization Block (CPB), and the Spatial-Gradient Boundary Attention Block (SGBAB). The FEB applies dilation and spatial attention to high-level features, enhancing their discriminative power and improving the model's ability to capture complex patterns. The CPB, an efficient alternative to traditional channel attention blocks, assigns prioritization weights to diverse feature channels. The SGBAB replaces conventional boundary attention mechanisms with a more efficient solution that focuses on the spatial attention of the feature map. It employs a Jacobian-based approach to construct learned convolutions on both vertical and horizontal components of the feature map. This allows the SGBAB to effectively understand the changes in the feature map across different spatial locations, which is crucial for detecting the boundaries of complex-shaped polyps. These blocks are strategically embedded within the network's skip connections, enhancing the model's boundary detection capabilities without imposing excessive computational demands. They exploit and enhance features at three levels: high, mid, and low, thereby ensuring the detection of a wide range of polyps. SCABNet has been trained on the Kvasir-SEG and CVC-ClinicDB datasets and evaluated on multiple datasets, demonstrating superior results. The code is available on: https://github.com/KhaledELKarazle97/SCABNet.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 2","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70039","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143362284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MLFE-UNet: Multi-Level Feature Extraction Transformer-Based UNet for Gastrointestinal Disease Segmentation","authors":"Anass Garbaz, Yassine Oukdach, Said Charfi, Mohamed El Ansari, Lahcen Koutti, Mouna Salihoun, Samira Lafraxo","doi":"10.1002/ima.70030","DOIUrl":"https://doi.org/10.1002/ima.70030","url":null,"abstract":"<div>\u0000 \u0000 <p>Accurately segmenting gastrointestinal (GI) disease regions from Wireless Capsule Endoscopy images is essential for clinical diagnosis and survival prediction. However, challenges arise due to similar intensity distributions, variable lesion shapes, and fuzzy boundaries. In this paper, we propose MLFE-UNet, an advanced fusion of CNN-based transformers with UNet. Both the encoder and decoder utilize a multi-level feature extraction (MLFA) CNN-Transformer-based module. This module extracts features from the input data, considering both global dependencies and local information. Furthermore, we introduce a multi-level spatial attention (MLSA) block that functions as the bottleneck. It enhances the network's ability to handle complex structures and overlapping regions in feature maps. The MLSA block captures multiscale dependencies of tokens from the channel perspective and transmits them to the decoding path. A contextual feature stabilization block follows each transition to emulate lesion zones and facilitate segmentation guidelines at each phase. To address high-level semantic information, we incorporate a computationally efficient spatial channel attention block. This is followed by a stabilization block in the skip connections, ensuring global interaction and highlighting important semantic features from the encoder to the decoder. To evaluate the performance of our proposed MLFE-UNet, we selected common GI diseases, specifically bleeding and polyps. The dice coefficient scores obtained by MLFE-UNet on the MICCAI 2017 (Red lesion) and CVC-ClinicalDB data sets are 92.34% and 88.37%, respectively.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143120026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing Breast Cancer Detection: Integrating Few-Shot and Transfer Learning for Enhanced Accuracy and Efficiency","authors":"Nadeem Sarwar, Shaha Al-Otaibi, Asma Irshad","doi":"10.1002/ima.70033","DOIUrl":"https://doi.org/10.1002/ima.70033","url":null,"abstract":"<div>\u0000 \u0000 <p>Breast cancer (BC) detection based on mammogram images is still an open issue, particularly when there is little annotated data. Combining few-shot learning (FSL) with transfer learning (TL) has been identified as a potential solution to overcome this problem due to its ability to learn from a few examples while producing robust features for classification. The objective of this study is to use and analyze FSL integrated with TL to enhance the classification accuracy and generalization ability in a limited dataset. The proposed approach integrates the FSL models (prototypical networks, matching networks, and relation networks) with the TL procedures. The models are trained using a small set of samples with annotation and can be assessed using various performance metrics. The models were trained and compared to the TL and the state-of-the-art methods regarding accuracy, precision, recall, F1-score, and area under the ROC curve (AUC). The models proved to be effective when integrated, and the relation networks model was the most accurate, with an accuracy of 95.6% and an AUC of 0.970. The models provided higher accuracy, recall, and F1-scores, especially in the case of discerning between normal, benign, and malignant cases, as compared to TL traditional techniques and the various recent state-of-the-art techniques. This integrated approach gives high efficiency, accuracy, and scalability to the whole BC detection process, and it has potential for further medical imaging domains. Future research will explore hyperparameter tuning and incorporating electronic health record systems to enhance diagnostic precision and individualized care.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Benchmarking YOLO Variants for Enhanced Blood Cell Detection","authors":"Pooja Mehta, Rohan Vaghela, Nensi Pansuriya, Jigar Sarda, Nirav Bhatt, Akash Kumar Bhoi, Parvathaneni Naga Srinivasu","doi":"10.1002/ima.70037","DOIUrl":"https://doi.org/10.1002/ima.70037","url":null,"abstract":"<div>\u0000 \u0000 <p>Blood cell detection provides a significant amount of information about a person's health, aiding in the diagnosis and monitoring of various medical conditions. Red blood cells (RBCs) carry oxygen, white blood cells (WBCs) play a role in immune defence, and platelets contribute to blood clotting. Changes in the composition of these cells can signal various physiological and pathological conditions, which makes accurate blood cell detection essential for effective medical diagnosis. In this study, we apply convolutional neural networks (CNNs), a subset of deep learning (DL) techniques, to automate blood cell detection. Specifically, we compare the performance of multiple variants of the You Only Look Once (YOLO) model, including YOLO v5, YOLO v7, YOLO v8 (in medium, small and nano configurations), YOLO v9c and YOLO v10 (in medium, small and nano configurations), for the task of detecting RBCs, WBCs and platelets. The results show that YOLO v5 achieved the highest mean average precision (mAP50) of 93.5%, with YOLO v10 variants also performing competitively. YOLO v10m achieved the highest precision for RBC detection at 85.1%, while YOLO v10n achieved 98.6% precision for WBC detection. YOLO v5 demonstrated the highest precision for platelets at 88.8%. Overall, YOLO models provided high accuracy and precision in detecting blood cells, making them suitable for medical image analysis. In conclusion, the study demonstrates that the YOLO model family, especially YOLO v5, holds significant potential for advancing automated blood cell detection. These findings can help improve diagnostic accuracy and contribute to more efficient clinical workflows.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An AG-RetinaNet for Embryonic Blastomeres Detection and Counting","authors":"Wenju Zhou, Ouafa Talha, Xiaofei Han, Qiang Liu, Yuan Xu, Zhenbo Zhang, Naitong Yuan","doi":"10.1002/ima.70034","DOIUrl":"https://doi.org/10.1002/ima.70034","url":null,"abstract":"<div>\u0000 \u0000 <p>Embryo morphology assessment is crucial for determining embryo viability in assisted reproductive technology. Traditional manual evaluation, while currently the primary method, is time-consuming, resource-intensive, and prone to inconsistencies due to the complex analysis of morphological parameters such as cell shape, size, and blastomere count. For rapid and accurate recognition and quantification of blastomeres in embryo images, Attention Gated-RetinaNet (AG-RetinaNet) model is proposed in this article. AG-RetinaNet combines an attention block between the backbone network and the Feature Pyramid Network to overcome the difficulties posed by overlapping blastomeres and morphological changes in embryo shape. The proposed model, trained on a dataset of human embryo images at different cell stages, uses ResNet50 and ResNet101 as backbones for performance comparison. Experimental results demonstrate its competitive performance against state-of-the-art detection models, achieving 95.8% average precision while balancing detection accuracy and computational efficiency. Specifically, the AG-RetinaNet achieves 83.08% precision, 91.13% sensitivity, 90.91% specificity, and an F1-score of 86.92% under optimized Intersection Over Union and confidence thresholds, effectively detecting and counting blastomeres across various grades. The comparison between these results and the manual annotations of embryologists confirms that our model has the potential to improve and streamline the workflow of embryologists in clinical practice.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Chronic Wound Assessment System Using an Improved UPerNet Model","authors":"Zaifang Zhang, Shenjun Sheng, Shihui Zhu, Jian Jin","doi":"10.1002/ima.70032","DOIUrl":"https://doi.org/10.1002/ima.70032","url":null,"abstract":"<div>\u0000 \u0000 <p>Wound assessment plays a crucial role in the healing process. Traditional methods for wound assessment, relying on manual judgment and recording, often yield inaccurate outcomes and require specialized medical equipment. This will result in increasing regular hospital visits and pose a series of challenges, such as delayed examinations, heightened infection risks, and increased costs. Thus, a real-time, portable, and convenient wound assessment system is essential for healing chronic wounds and reducing complications. This paper proposes an improved UPerNet network for wound tissue segmentation, which consists of three modules: Feature-aligned Pyramid Network (FaPN), Kernel update head, and Convolutional Block Attention Module (CBAM). The FaPN is employed to address feature misalignment. The Kernel update head is based on K-Net and dynamically updates convolutional kernel weights. The CBAM module is adopted to attend to crucial features. Ablation studies and comparative studies show that this method can achieve superior performance in wound tissue segmentation compared to other common segmentation models. Meanwhile, based on this method, we develop a mobile application. By using this system, patients can easily upload wound images for assessment, facilitating the convenient tracking of wound healing progress at home, thereby reducing medical expenses and hospital visitation times.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CBAM Attention Gate-Based Lightweight Deep Neural Network Model for Improved Retinal Vessel Segmentation","authors":"Kashif Fareed, Anas Khan, Musaed Alhussein, Khursheed Aurangzeb, Aamir Shahzad, Mazhar Islam","doi":"10.1002/ima.70031","DOIUrl":"https://doi.org/10.1002/ima.70031","url":null,"abstract":"<div>\u0000 \u0000 <p>Over the years, researchers have been using deep learning in different fields of science including disease diagnosis. Retinal vessel segmentation has seen significant advancements through deep learning techniques, resulting in high accuracy. Despite this progress, challenges remain in automating the segmentation process. One of the most pressing and often overlooked issues is computational complexity, which is critical for developing portable diagnostic systems. To address this, this study introduces a CBAM-Attention Gate-based U-Netmodel aimed at reducing computational complexity without sacrificing performance on evaluation metrics. The performance of the model was analyzed using four publicly available fundus image datasets: CHASE_DB1, DRIVE, STARE, and HRF, and it achieved sensitivity, specificity, accuracy, AUC, and MCC performances (0.7909, 0.9975, 0.9723, 0.9867, and 0.8011), (0.8217, 0.9816, 0.9674, 0.9849, and 0.9778), (0.8346, 0.9790, 0.9680, 0.9855, and 0.7810), and (0.8082, 0.9769, 0.9638, 0.9723, and 0.7575), respectively. Moreover, this model comprises of only 0.8 million parameters, which makes it one of the lightest available models used for retinal vessel segmentation. This lightweight yet efficient model is most suitable for use in low-end hardware devices. The attributes of significantly lower computational complexity along with improved evaluation metrics advocates for its deployment in portable embedded devices to be used for population-level screening programs.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NIR-II Fluorescence Image Translation via Latent Space Disentanglement","authors":"Xiaoming Yu, Jie Tian, Zhenhua Hu","doi":"10.1002/ima.70028","DOIUrl":"https://doi.org/10.1002/ima.70028","url":null,"abstract":"<div>\u0000 \u0000 <p>The second near-infrared window (NIR-II) fluorescence imaging is an excellent optical in vivo imaging method. Compared with NIR-IIa window (1000–1300 nm), NIR-IIb window (1500–1700 nm) imaging can significantly improve the imaging effect. However, due to the limitation that there are no molecular probes approved for NIR-IIb imaging in humans, we expect to achieve the translation of NIR-IIa images to NIR-IIb images through artificial intelligence. NIR-II fluorescence imaging is divided into macroscopic imaging of animal bodies and microscopic imaging of tissue and nerves. The two imaging scenarios are different. To realize the translation of two scene images at the same time, this paper designs a generative adversarial network model. The core idea is to disentangle the information in the encoded latent space into the information shared by the macroscopic and microscopic images and information specific to both to extract the high-quality feature maps for decoding. In addition, we improve the contrastive loss and use the attention-aware sampling strategy to select patches, which further maintains the source image content structure. The experiment results demonstrate the superiority and effectiveness of the proposed method.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143117887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}