{"title":"Smartphone-Based Oral Lesion Image Segmentation Using Deep Learning.","authors":"Tapabrat Thakuria, Lipi B Mahanta, Sanjib Kumar Khataniar, Rahul Dev Goswami, Nevica Baruah, Trailokya Bharali","doi":"10.1007/s10278-025-01455-0","DOIUrl":"https://doi.org/10.1007/s10278-025-01455-0","url":null,"abstract":"<p><p>Early detection of oral diseases, both cancerous and non-cancerous, is essential for improved outcomes. Segmentation of these lesions from the background is a crucial step in diagnosis, aiding clinicians in isolating affected areas and enhancing the accuracy of deep learning (DL) models. This study aims to develop a DL-based solution for segmenting oral lesions using smartphone-captured images. We designed a novel UNet-based model, OralSegNet, incorporating EfficientNetV2L as the encoder, along with Atrous Spatial Pyramid Pooling (ASPP) and residual blocks to enhance segmentation accuracy. The dataset consisted of 538 raw images with an average resolution of 1394 × 1524 pixels, along with corresponding annotated images of oral lesions. These images were pre-processed and resized to 256 × 256 pixels, and data augmentation techniques were applied to enhance the model's robustness. Our model achieved Dice coefficients of 0.9530 and 0.8518 and IoU scores of 0.9104 and 0.7550 in the validation and test phases, respectively, outperforming traditional and state-of-the-art models. The architecture is computationally efficient, achieving the lowest FLOPs (34.30 GFLOPs) despite being the most parameter-heavy model (104.46 million). 
Given the widespread availability of smartphones, OralSegNet offers a cost-effective, non-invasive CNN model for clinicians, making early diagnosis accessible even in rural areas.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
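The Dice coefficient and IoU figures reported for OralSegNet follow the standard overlap definitions for binary masks. A minimal sketch of those metrics (generic metric code, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU (Jaccard) = |A∩B| / |A∪B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((intersection + eps) / (union + eps))
```

The small eps keeps both metrics defined when prediction and ground truth are both empty.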
{"title":"SSW-YOLO: Enhanced Blood Cell Detection with Improved Feature Extraction and Multi-scale Attention.","authors":"Hai Sun, Xiaorong Wan, Shouguo Tang, Yingna Li","doi":"10.1007/s10278-025-01460-3","DOIUrl":"https://doi.org/10.1007/s10278-025-01460-3","url":null,"abstract":"<p><p>The integration of deep learning in medical image analysis has driven significant progress, especially in the domain of automatic blood cell detection. While the YOLO series of algorithms have become widely adopted as a real-time object detection approach, further refinement is needed for detecting small targets such as blood cells and for low-resolution images. In this context, we introduce SSW-YOLO, a novel algorithm designed to tackle these challenges. The primary innovations of SSW-YOLO include the use of a spatial-to-depth convolution (SPD-Conv) layer to enhance feature extraction, the adoption of a Swin Transformer for multi-scale attention mechanisms, the simplification of the C2f module to reduce model complexity, and the utilization of a Wasserstein distance loss (WDLoss) function to improve localization accuracy. With these enhancements, SSW-YOLO significantly improves the accuracy and efficiency of blood cell detection, reduces human error, and consequently accelerates the diagnosis of blood disorders while enhancing the precision of clinical diagnoses. 
Empirical analysis on the BCCD blood cell dataset indicates that SSW-YOLO achieves a mean average precision (mAP) of 94.0%, demonstrating superior performance compared to existing methods.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
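The mAP figure reported for SSW-YOLO rests on matching predicted boxes to ground truth at an IoU threshold. The box-IoU step underlying that matching can be sketched as follows (an illustrative helper, not the paper's code):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    # Intersection rectangle (clamped to zero width/height when disjoint).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction typically counts as a true positive when `box_iou` with an unmatched ground-truth box exceeds a threshold such as 0.5; precision-recall curves over confidence scores then yield AP per class, averaged into mAP.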
{"title":"Diagnosing Ankylosing Spondylitis via Architecture-Modified ResNet and Combined Conventional Magnetic Resonance Imagery.","authors":"Riel Castro-Zunti, Eun Hae Park, Hae Ni Park, Younhee Choi, Gong Yong Jin, Hee Suk Chae, Seok-Bum Ko","doi":"10.1007/s10278-025-01427-4","DOIUrl":"https://doi.org/10.1007/s10278-025-01427-4","url":null,"abstract":"<p><p>Ankylosing spondylitis (AS), a lifelong inflammatory disease, leads to fusion of vertebrae and sacroiliac joints (SIJs) if undiagnosed. Conventional magnetic resonance imaging (MRI), e.g., T1w/T2w, is the diagnostic modality of choice for AS. However, computed tomography (CT), a second-line modality, offers higher specificity because CT differentiates AS-relevant bony erosions/lesions better than MRI. We wished to ascertain whether MRI could be used to train/optimize convolutional neural networks (CNNs) for AS classification and which type of conventional MRI may dominate. We extracted 534 AS and 606 control SIJs from 56 patients with three simultaneously captured conventional MRI sequences. For classification, we compared modified/optimized variants of ResNet50, InceptionV3, and VGG16. CNNs were fine-tuned using 6-fold cross-validation and optimized architecturally and by learning rate. To automate SIJ extraction, we also developed a YOLOv5-based SIJ detector. Models trained on images that were the RGB combination of the MRI sequences significantly outperformed models trained on any one sequence (p < 0.05). The best architecture, located via architectural decomposition, was the first 9 blocks of ResNet50. The reduced-parameters model, which met or exceeded the full architecture's performance with 83% fewer parameters, achieved a cross-validation test set accuracy, sensitivity, specificity, and ROC AUC of 95.26%, 96.25%, 94.39%, and 99.1%, respectively. Our SIJ detector achieved 96.88-99.88% mAP@0.5. Deep learning models successfully distinguish AS from control SIJs. Models trained on combined conventional MRI achieve high sensitivity and specificity, reducing the need for CT and its associated radiation exposure.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
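The accuracy, sensitivity, and specificity reported for the AS classifier derive from confusion-matrix counts. A minimal sketch of those definitions:

```python
def binary_classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, sensitivity (recall), and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # all correct / all cases
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    return accuracy, sensitivity, specificity
```

Here "positive" would be an AS SIJ and "negative" a control SIJ; the counts themselves come from the cross-validation test folds.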
{"title":"Multi-attention Mechanism for Enhanced Pseudo-3D Prostate Zonal Segmentation.","authors":"Chetana Krishnan, Ezinwanne Onuoha, Alex Hung, Kyung Hyun Sung, Harrison Kim","doi":"10.1007/s10278-025-01401-0","DOIUrl":"10.1007/s10278-025-01401-0","url":null,"abstract":"<p><p>This study presents a novel pseudo-3D Global-Local Channel Spatial Attention (GLCSA) mechanism designed to enhance prostate zonal segmentation in high-resolution T2-weighted MRI images. GLCSA captures complex, multi-dimensional features while maintaining computational efficiency by integrating global and local attention in channel and spatial domains, complemented by a slice interaction module simulating 3D processing. Applied across various U-Net architectures, GLCSA was evaluated on two datasets: a proprietary set of 44 patients and the public ProstateX dataset of 204 patients. Performance, measured using the Dice Similarity Coefficient (DSC) and Mean Surface Distance (MSD) metrics, demonstrated significant improvements in segmentation accuracy for both the transition zone (TZ) and peripheral zone (PZ), with minimal parameter increase (1.27%). GLCSA achieved DSC increases of 0.74% and 11.75% for TZ and PZ, respectively, in the proprietary dataset. In the ProstateX dataset, improvements were even more pronounced, with DSC increases of 7.34% for TZ and 24.80% for PZ. Comparative analysis showed GLCSA-UNet performing competitively against other 2D, 2.5D, and 3D models, with DSC values of 0.85 (TZ) and 0.65 (PZ) on the proprietary dataset and 0.80 (TZ) and 0.76 (PZ) on the ProstateX dataset. Similarly, MSD values were 1.14 (TZ) and 1.21 (PZ) on the proprietary dataset and 1.48 (TZ) and 0.98 (PZ) on the ProstateX dataset. Ablation studies highlighted the effectiveness of combining channel and spatial attention and the advantages of global embedding over patch-based methods. 
In conclusion, GLCSA offers a robust balance between the detailed feature capture of 3D models and the efficiency of 2D models, presenting a promising tool for improving prostate MRI image segmentation.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143532118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
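GLCSA's internals are not fully specified in the abstract; as a rough, parameter-free illustration of what channel and spatial gating mean (a toy stand-in for a learned attention module, not the authors' GLCSA):

```python
import numpy as np

def channel_attention(feature_map: np.ndarray) -> np.ndarray:
    """Gate each channel of a (C, H, W) feature map by a sigmoid of its global average.
    A learned module would pass the descriptor through trainable layers first."""
    descriptor = feature_map.mean(axis=(1, 2))        # squeeze: one scalar per channel
    gate = 1.0 / (1.0 + np.exp(-descriptor))          # excite: sigmoid gate in (0, 1)
    return feature_map * gate[:, None, None]          # reweight channels

def spatial_attention(feature_map: np.ndarray) -> np.ndarray:
    """Gate each spatial location by a sigmoid of its cross-channel mean."""
    descriptor = feature_map.mean(axis=0)             # (H, W) spatial descriptor
    gate = 1.0 / (1.0 + np.exp(-descriptor))
    return feature_map * gate[None, :, :]
```

Combining both gates, plus a slice-interaction step across neighbouring 2D slices, is the general shape of a pseudo-3D attention design like the one described.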
{"title":"Explained Deep Learning Framework for COVID-19 Detection in Volumetric CT Images Aligned with the British Society of Thoracic Imaging Reporting Guidance: A Pilot Study.","authors":"Shereen Fouad, Muhammad Usman, Ra'eesa Kabir, Arvind Rajasekaran, John Morlese, Pankaj Nagori, Bahadar Bhatia","doi":"10.1007/s10278-025-01444-3","DOIUrl":"https://doi.org/10.1007/s10278-025-01444-3","url":null,"abstract":"<p><p>In March 2020, the British Society of Thoracic Imaging (BSTI) introduced a reporting guidance for COVID-19 detection to streamline standardised reporting and enhance agreement between radiologists. However, most current deep learning (DL) methods do not conform to this guidance. This study introduces a multi-class DL model to identify BSTI COVID-19 categories within CT volumes, classified as 'Classic', 'Probable', 'Indeterminate', or 'Non-COVID'. A total of 56 pseudo-anonymised CT scans were collected from patients with suspected COVID-19 and annotated by an experienced chest subspecialty radiologist following the BSTI guidance. We evaluated the performance of multiple DL-based models, including three-dimensional (3D) ResNet architectures, pre-trained on the Kinetics-700 video dataset. For better interpretability of the results, our approach incorporates a post-hoc visual explainability feature to highlight the areas of the image most indicative of the COVID-19 category. Our four-class classification DL framework achieves an overall accuracy of 75%. However, the model struggled to detect the 'Indeterminate' COVID-19 group, whose removal significantly improved the model's accuracy to 90%. The proposed explainable multi-classification DL model yields accurate detection of 'Classic', 'Probable', and 'Non-COVID' categories with poor detection ability for 'Indeterminate' COVID-19 cases. These findings are consistent with clinical studies that manually validated the BSTI reporting guidance amongst consultant radiologists.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
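The jump from 75% to 90% accuracy after dropping the 'Indeterminate' group is a straightforward recomputation over the remaining cases. A sketch of that recomputation (the label lists here are illustrative, not the study's data):

```python
def accuracy_excluding(y_true, y_pred, excluded=None):
    """Overall accuracy, optionally ignoring samples whose true label is `excluded`."""
    pairs = [(t, p) for t, p in zip(y_true, y_pred) if t != excluded]
    return sum(t == p for t, p in pairs) / len(pairs)
```

On a toy set where only the 'Indeterminate' case is misclassified, excluding that class lifts accuracy from 3/4 to 3/3.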
{"title":"DEXA Result Automation into Radiology Reports: An Implementation Guide for Radiologists, PACS Administrators, and Technicians.","authors":"Nathan A Bumbarger, Stephanie Y Jo, Brandon J Cofield, Olga Haan, Devina Chatterjee","doi":"10.1007/s10278-025-01451-4","DOIUrl":"https://doi.org/10.1007/s10278-025-01451-4","url":null,"abstract":"<p><p>Osteoporosis is prevalent among older adults, significantly increasing fracture risk, with hip fractures often leading to reduced survival. Dual-energy X-ray absorptiometry (DEXA) is the gold standard for diagnosing osteoporosis. However, manual transcription of DEXA results into radiology reports is error-prone and time-consuming. This study explores the implementation of a vendor-neutral structured report (SR) system to automate data import from DEXA scans, aiming to improve efficiency and accuracy. The study involved the use of Nuance PowerScribe 360 and Hyland's PACSgear ModLink for automating DEXA data entry into radiology reports. ModLink translates DEXA results into structured data, which is mapped to customized report templates. Radiologists dictated reports using templates with and without the imported SR data, and dictation times were compared between pre- and post-implementation measurements. The implementation of the SR system led to a significant reduction in report generation time, with radiologists achieving up to a fivefold decrease in dictation time. The slowest reader saw a 2.5-fold improvement, and the fastest reader showed a fivefold improvement (p < 0.01). No errors in data mapping were observed, indicating reliable integration of the SR system. In light of the current radiologist shortage, the SR system demonstrated notable improvements in workflow efficiency without adding to technologist workload. The time savings and reduced transcription errors offer radiology practices a valuable tool to enhance productivity and patient care. 
Automating the DEXA data transcription process using a structured report system substantially improves efficiency, minimizes errors, and has minimal implementation burden, representing a promising intervention for radiology practices facing increasing demand.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
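At its core, the ModLink-to-template mapping described above amounts to filling merge fields in a report template from structured values. A toy sketch of that idea, with hypothetical field names (the actual merge-field syntax and field catalogue are vendor-specific):

```python
def fill_dexa_template(template: str, results: dict) -> str:
    """Replace merge fields like [SPINE_TSCORE] in a report template with
    structured values. Field names here are hypothetical examples."""
    out = template
    for field, value in results.items():
        out = out.replace(f"[{field}]", str(value))
    return out

# Hypothetical template fragment and structured payload:
template = "L1-L4 BMD: [SPINE_BMD] g/cm2, T-score: [SPINE_TSCORE]"
report = fill_dexa_template(template, {"SPINE_BMD": 0.812, "SPINE_TSCORE": -2.6})
```

The point of the SR approach is that this substitution happens automatically at dictation time, so the radiologist never retypes the numbers.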
{"title":"Digital Pathology Displays Under Pressure: Benchmarking Performance Across Market Grades.","authors":"Stefano Marletta, Alessandro Caputo, Gabriele Guidi, Liron Pantanowitz, Fabio Pagni, Iacopo Bavieri, Vincenzo L'Imperio, Matteo Brunelli, Angelo Paolo Dei Tos, Albino Eccher","doi":"10.1007/s10278-025-01452-3","DOIUrl":"https://doi.org/10.1007/s10278-025-01452-3","url":null,"abstract":"<p><p>Digital pathology (DP) has transformed the practice of pathology by digitizing pathology glass slides, thereby enhancing diagnostic capabilities. In contrast to radiology, studies comparing the efficiency of DP monitors are limited. This work used a stress test that simulated DP sign-out in practice to evaluate the performance of medical-grade (MG) and consumer off-the-shelf (COTS) displays. Four displays, including three MG and one COTS, were assessed for luminance, contrast ratio, accuracy, and image uniformity. Key metrics, such as luminance uniformity and maximum brightness, were evaluated over a 1-month period of simulated use reflecting an 8-h work day. MG displays outperformed COTS in critical parameters, even though consumer displays were satisfactory for diagnostic purposes. Image uniformity exhibited the most significant variations, with deterioration exceeding 2.5% for all displays during the test period. This study compared different types of displays for DP and highlights the importance of regular calibration for maintaining display performance when using DP. 
Further research is recommended to define validation protocols, including the impact of display aging on DP accuracy.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
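Luminance uniformity checks like those above are commonly expressed as a percent deviation across sample points on the screen; one common convention (standards and vendors differ on the exact formula and sampling grid) is:

```python
def luminance_nonuniformity(samples) -> float:
    """Percent luminance non-uniformity across screen sample points:
    100 * (Lmax - Lmin) / Lmax. One common convention; not from the paper."""
    lmax, lmin = max(samples), min(samples)
    return 100.0 * (lmax - lmin) / lmax
```

Under this convention, nine-point measurements of 500, 490, and 487.5 cd/m2 would give 2.5% non-uniformity, on the order of the deterioration the study reports.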
{"title":"A Proof-of-Concept Solution for Co-locating 2D Histology Images in 3D for Histology-to-CT and MR Image Registration: Closing the Loop for Bone Sarcoma Treatment Planning.","authors":"Robert Phillips, Constantine Zakkaroff, Keren Dittmer, Nicholas Robilliard, Kenzie Baer, Anthony Butler","doi":"10.1007/s10278-025-01426-5","DOIUrl":"https://doi.org/10.1007/s10278-025-01426-5","url":null,"abstract":"<p><p>This work presents a proof-of-concept solution designed to facilitate more accurate radiographic feature characterisation in pre-surgical CT/MR volumes. The solution involves 3D co-location of 2D digital histology slides within ex-vivo, tumour tissue CT volumes. Initially, laboratory dissection measurements seed the placement of histology slices in corresponding CT volumes, followed by in-plane point-based registration of bone in histology images to the bone in CT. Validation using six bisected canine humerus ex-vivo CT datasets indicated a plane misalignment of 0.19 ± 1.8 mm. User input sensitivity was assessed at 0.08 ± 0.2 mm for plane translation and 0-1.6° deviation. These results show a similar magnitude of error to related prostate histology co-location work. Although demonstrated with a femoral canine sarcoma tumour, this solution can be generalised to various orthopaedic geometries and sites. It supports high-fidelity histology image co-location to improve understanding of tissue characterisation accuracy in clinical radiology. This solution requires only minimal adjustment to routine workflows. 
By integrating histology insights earlier in the presentation-diagnosis-planning-surgery-recovery loop, this solution guides data co-location to support the continued evaluation of safe pre-surgical margins.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
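The in-plane point-based registration step described above can be sketched with the standard least-squares rigid alignment (Kabsch/Procrustes) of corresponding 2D points; this generic version is not the authors' code:

```python
import numpy as np

def rigid_register_2d(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src points onto dst
    (Kabsch algorithm). src, dst: (N, 2) arrays of corresponding points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)            # 2x2 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against reflections
    r = vt.T @ np.diag([1.0, d]) @ u.T
    t = dst_c - r @ src_c
    return r, t
```

Given landmark pairs picked on bone in the histology image and the CT slice, `src @ r.T + t` carries histology coordinates into the CT plane.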
{"title":"An Analysis of the Efficacy of Deep Learning-Based Pectoralis Muscle Segmentation in Chest CT for Sarcopenia Diagnosis.","authors":"Joo Chan Choi, Young Jae Kim, Kwang Gi Kim, Eun Young Kim","doi":"10.1007/s10278-025-01443-4","DOIUrl":"https://doi.org/10.1007/s10278-025-01443-4","url":null,"abstract":"<p><p>Sarcopenia is the loss of skeletal muscle function and mass and is a poor prognostic factor. This condition is typically diagnosed by measuring skeletal muscle mass at the L3 level. Chest computed tomography (CT) scans do not include the L3 level. We aimed to determine if these scans can be used to diagnose sarcopenia and thus guide patient management and treatment decisions. This study compared the ResNet-UNet, Recurrent Residual UNet, and UNet3+ models for segmenting and measuring the pectoralis muscle area in chest CT images. A total of 4932 chest CT images were collected from 1644 patients, and additional abdominal CT data were collected from 294 patients. The performance of the models was evaluated using the Dice similarity coefficient (DSC), accuracy, sensitivity, and specificity. Furthermore, the correlation between the segmented pectoralis and L3 muscle areas was compared using linear regression analysis. All three models demonstrated a high segmentation performance, with the UNet3+ model achieving the best performance (DSC 0.95 ± 0.03). The Pearson correlation coefficient between the pectoralis and L3 muscle areas showed a significant positive correlation (r = 0.65). The correlation coefficient between the transformed pectoralis and L3 muscle areas showed a stronger positive correlation in both univariate analysis using only muscle area (r = 0.74) and multivariate analysis considering sex, weight, age, and muscle area (r = 0.83). Segmentation of the pectoralis muscle area using artificial intelligence (AI) on chest CT was highly accurate, and the measured values showed a strong correlation with the L3 muscle area. 
Chest CT using AI technology could play a significant role in the diagnosis of sarcopenia.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
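The correlation figures above (r = 0.65 to 0.83) are Pearson coefficients between the two muscle-area measurements. A minimal sketch of that computation:

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson correlation coefficient between two 1-D sequences."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))
```

Here x would hold pectoralis areas and y the paired L3 areas; r near 1 means chest-level measurements track the L3 reference closely.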
{"title":"Ultrasound Thyroid Nodule Segmentation Algorithm Based on DeepLabV3+ with EfficientNet.","authors":"Nan Xiao, Demin Kong, Junfeng Wang","doi":"10.1007/s10278-025-01436-3","DOIUrl":"https://doi.org/10.1007/s10278-025-01436-3","url":null,"abstract":"<p><p>Ultrasound is widely used to monitor and diagnose thyroid nodules, but accurately segmenting these nodules in ultrasound images remains a challenge due to the presence of noise and artifacts, which often blur nodule boundaries. While several deep learning algorithms have been developed for this task, their performance is frequently suboptimal. In this study, we introduce the use of EfficientNet-B7 as the backbone for the DeepLabV3+ architecture in thyroid nodule segmentation, marking its first application in this area. We evaluated the proposed method using a dataset from the First Affiliated Hospital of Zhengzhou University, along with two public datasets. The results demonstrate high performance, with a pixel accuracy (PA) of 97.67%, a Dice similarity coefficient of 0.8839, and an Intersection over Union (IoU) of 79.69%. These outcomes outperform most traditional segmentation networks.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143506760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
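The pixel accuracy (PA) reported above is simply the fraction of pixels whose predicted label matches the ground truth. A minimal sketch (generic metric code, not the authors' implementation):

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels whose predicted label equals the ground-truth label."""
    assert pred.shape == target.shape, "masks must have identical shapes"
    return float((pred == target).mean())
```

Unlike Dice or IoU, PA counts background pixels too, so it can look high even when small nodules are missed; that is why the study reports all three metrics.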