{"title":"Prediction of Alzheimer's Disease Using Modified DNN with Optimal Feature Selection Based on Seagull Optimization.","authors":"Ashok Bhansali, Devulapalli Sudheer, Shrikant Tiwari, Venkata Subbaiah Desanamukula, Faiyaz Ahmad","doi":"10.1007/s10278-024-01262-z","DOIUrl":"https://doi.org/10.1007/s10278-024-01262-z","url":null,"abstract":"<p><p>Alzheimer's disease is a degenerative neurological condition resulting in brain cell death and brain tissue loss. Most importantly, memory-related brain cells are permanently harmed due to this condition. Alzheimer's disease diagnosis is a challenging task due to its high discriminative feature representation for classification using traditional machine learning (ML) methods. These challenges exist due to similar brain processes and pixel intensities. To overcome the above mentioned drawbacks, hybrid feature extraction techniques such as Gray Level Run Length Matrix (GLRLM), Gabor wavelet transform and Local Energy-based Shape Histogram (LESH) are used. In this designed model, Alzheimer's disease is predicted using brain MRI. At first, the collected magnetic resonance imaging (MRI) of the brain are resized and enhanced using the image resizing and BW-net technique. Features from these enhanced images are extracted using the GLRLM, Gabor wavelet transform and LESH techniques for shape, texture and edge of the brain MRI. Then, the extracted features are optimally selected using the SEAGULL optimization technique. These optimally selected features are trained using the modified DNN for predicting Alzheimer's disease. Performance metrics for proposed and existing models are studied and contrasted in order to assess the planned model. For the proposed model, 91%, 2%, 98% and 97% are performance metrics that were reached in aspects of precision, error, accuracy and recall. Thus, designed Alzheimer's disease prediction using modified DNN with optimal feature selection based on seagull optimization performs better and accurately predicts Alzheimer's disease.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142815449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning-Based Body Composition Analysis for Cancer Patients Using Computed Tomographic Imaging.","authors":"İlkay Yıldız Potter, Maria Virginia Velasquez-Hammerle, Ara Nazarian, Ashkan Vaziri","doi":"10.1007/s10278-024-01373-7","DOIUrl":"https://doi.org/10.1007/s10278-024-01373-7","url":null,"abstract":"<p><p>Malnutrition is a commonly observed side effect in cancer patients, with a 30-85% worldwide prevalence in this population. Existing malnutrition screening tools miss ~ 20% of at-risk patients at initial screening and do not capture the abnormal body composition phenotype. Meanwhile, the gold-standard clinical criteria to diagnose malnutrition use changes in body composition as key parameters, particularly body fat and skeletal muscle mass loss. Diagnostic imaging, such as computed tomography (CT), is the gold-standard in analyzing body composition and typically accessible to cancer patients as part of the standard of care. In this study, we developed a deep learning-based body composition analysis approach over a diverse dataset of 200 abdominal/pelvic CT scans from cancer patients. The proposed approach segments adipose tissue and skeletal muscle using Swin UNEt TRansformers (Swin UNETR) at the third lumbar vertebrae (L3) level and automatically localizes L3 before segmentation. The proposed approach involves the first transformer-based deep learning model for body composition analysis and heatmap regression-based vertebra localization in cancer patients. Swin UNETR attained 0.92 Dice score in adipose tissue and 0.87 Dice score in skeletal muscle segmentation, significantly outperforming convolutional benchmarks including the 2D U-Net by 2-12% Dice score (p-values < 0.033). Moreover, Swin UNETR predictions showed high agreement with ground-truth areas of skeletal muscle and adipose tissue by 0.7-0.93 R<sup>2</sup>, highlighting its potential for accurate body composition analysis. We have presented an accurate body composition analysis based on CT imaging, which can enable the early detection of malnutrition in cancer patients and support timely interventions.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142815459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accelerated T2W Imaging with Deep Learning Reconstruction in Staging Rectal Cancer: A Preliminary Study.","authors":"Lan Zhu, Bowen Shi, Bei Ding, Yihan Xia, Kangning Wang, Weiming Feng, Jiankun Dai, Tianyong Xu, Baisong Wang, Fei Yuan, Hailin Shen, Haipeng Dong, Huan Zhang","doi":"10.1007/s10278-024-01345-x","DOIUrl":"https://doi.org/10.1007/s10278-024-01345-x","url":null,"abstract":"<p><p>Deep learning reconstruction (DLR) has exhibited potential in saving scan time. There is limited research on the evaluation of accelerated acquisition with DLR in staging rectal cancers. Our first objective was to explore the best DLR level in saving time through phantom experiments. Resolution and number of excitations (NEX) adjusted for different scan time, image quality of conventionally reconstructed T2W images were measured and compared with images reconstructed with different DLR level. The second objective was to explore the feasibility of accelerated T2W imaging with DLR in image quality and diagnostic performance for rectal cancer patients. 52 patients were prospectively enrolled to undergo accelerated acquisition reconstructed with highly-denoised DLR (DLR_H<sub>40sec</sub>) and conventional reconstruction (ConR<sub>2min</sub>). The image quality and diagnostic performance were evaluated by observers with varying experience and compared between protocols using κ statistics and area under the receiver operating characteristic curve (AUC). The phantom experiments demonstrated that DLR_H could achieve superior signal-to-noise ratio (SNR), detail conspicuity, sharpness, and less distortion within the least scan time. The DLR_H<sub>40sec</sub> images exhibited higher sharpness and SNR than ConR<sub>2min</sub>. The agreements with pathological TN-stages were improved using DLR_H<sub>40sec</sub> images compared to ConR<sub>2min</sub> (T: 0.846vs. 0.771, 0.825vs. 0.700, and 0.697vs. 0.512; N: 0.527vs. 0.521, 0.421vs. 0.348 and 0.517vs. 0.363 for junior, intermediate, and senior observes, respectively). Comparable AUCs to identify T3-4 and N1-2 tumors were achieved using DLR_H<sub>40sec</sub> and ConR<sub>2min</sub> images (P > 0.05). Consequently, with 2/3-time reduction, DLR_H<sub>40sec</sub> images showed improved image quality and comparable TN-staging performance to conventional T2W imaging for rectal cancer patients.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142815457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ColonNeXt: Fully Convolutional Attention for Polyp Segmentation.","authors":"Dinh Cong Nguyen, Hoang Long Nguyen","doi":"10.1007/s10278-024-01342-0","DOIUrl":"https://doi.org/10.1007/s10278-024-01342-0","url":null,"abstract":"<p><p>This study introduces ColonNeXt, a novel fully convolutional attention-based model for polyp segmentation from colonoscopy images, aimed at the enhancing early detection of colorectal cancer. Utilizing a purely convolutional neural network (CNN), ColonNeXt integrates an encoder-decoder structure with a hierarchical multi-scale context-aware network (MSCAN) in the encoder and a convolutional block attention module (CBAM) in the decoder. The decoder further includes a proposed CNN-based feature attention mechanism for selective feature enhancement, ensuring precise segmentation. A new refinement module effectively improves boundary accuracy, addressing challenges such as variable polyp size, complex textures, and inconsistent illumination. Evaluations on standard datasets show that ColonNeXt achieves high accuracy and efficiency, significantly outperforming competing methods. These results confirm its robustness and precision, establishing ColonNeXt as a state-of-the-art model for polyp segmentation. The code is available at: https://github.com/long-nguyen12/colonnext-pytorch .</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142809063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating Global Health Initiatives into Routine Radiology Workflow in the USA.","authors":"Kevin Junck, Jordan D Perchik, Matthew Larrison, Adam Yates, Stephen Durham, Vamsi Penmetsa, Srini Tridandapani","doi":"10.1007/s10278-024-01356-8","DOIUrl":"https://doi.org/10.1007/s10278-024-01356-8","url":null,"abstract":"<p><p>Radiologist shortages and lack of access to radiology services are common issues in low- and middle-income countries around the world. Teleradiology offers radiologists an opportunity to contribute to global health and support hospital systems in low-resource regions remotely. Challenges can occur when determining how to integrate the new remote worklist, how radiologists will view and report exams, and how a US host site can ensure safety and privacy across the different systems. In this manuscript, we describe our experience integrating exams performed at a remote hospital system in Ethiopia into a routine radiology worklist in the USA.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142804161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diagnosis of Acute Versus Chronic Thoracolumbar Vertebral Compression Fractures Using CT Radiomics Based on Machine Learning: a Preliminary Study.","authors":"Xiangrong Zhuang, Jinan Wang, Jianghe Kang, Ziying Lin","doi":"10.1007/s10278-024-01359-5","DOIUrl":"https://doi.org/10.1007/s10278-024-01359-5","url":null,"abstract":"<p><p>The purpose of this study is to evaluate the performance of radiomic models in acute thoracolumbar vertebral compression fractures (VCFs) and their impact on radiologists. In this monocentre retrospective study, eligible for inclusion were adults who underwent emergent thoracic/lumbar CT between May 2022 and November 2023 in our hospital diagnosed with thoracolumbar VCFs. The lesions were randomly divided at a ratio of 7:3 into a training set and test set. For external validation, consecutive patients who underwent emergent thoracic/lumbar CT between January 2022 and April 2022 were included. MRI and previous imaging were used as reference standard. The vertebral body area was manually segmented. Logistic regression was used to construct a CT radiomic model and a combined model, including Relief-selected radiomic features and clinical information. The radiologists' diagnosis with and without the models was recorded. The performance was assessed using receiver operating characteristic curves (ROC), calibration curves (CC) and decision curve analysis (DCA). Of 235 VCFs in 147 patients (median age, 73 years, 66 male) included, the diagnosis of acute VCFs was confirmed in 126. The area under the ROC of the CT radiomics model and the combined model in the external validation set were 0.883 (95% CI 0.777, 0.998) and 0.875 (95% CI 0.768, 0.982), respectively. CC and DCA showed good clinical application of the models. The less experienced reader achieved a higher accuracy with the help of the models (p = 0.027). The radiomic models showed high accuracy for diagnosing acute VCFs and helped radiologists improve the accuracy of diagnosis.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142804152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Convolutional Generative Adversarial Network for Improved Cardiac Image Classification in Heart Disease Diagnosis.","authors":"Gurusubramani S, Latha B","doi":"10.1007/s10278-024-01343-z","DOIUrl":"https://doi.org/10.1007/s10278-024-01343-z","url":null,"abstract":"<p><p>Heart disease is a fatal disease that causes significant mortality rates worldwide. The accurate and early detection of heart diseases is the most challenging task to save valuable lives. To avoid these issues, the Deep Convolutional Generative Adversarial Network (DCGAN) model is proposed that generates synthetic cardiac images. Here, two types of heart disease datasets such as the Sunnybrook Cardiac Dataset (SCD) and the Automated Cardiac Diagnosis Challenge (ACDC) dataset are selected to choose real cardiac images for implementation. The quality and consistency of the cardiac images are enhanced by preprocessed real cardiac images. In the DCGAN model, the generator is used for converting real cardiac images into synthetic images and the discriminator is responsible for differentiating real and synthetic cardiac images by binary classification decisions. To enhance the model's robustness and generalization ability, diverse augmentation techniques are implemented. The VGG16 model is applied in this paper for the image classification task and fine-tuned its parameters to optimize model convergence. For experimental validation, some of the significance metrics such as accuracy, precision, diagnostic time, peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), false positive rate (FPR), false negative rate (FNR), and mean squared error (MSE) are utilized. The extensive experimental evaluations are carried out based on this metrics and attained a performance rate of the proposed method as 98.83%, 1.17%, 3.2%, 41.78, 4.52, 0.932, and 1.6 s from accuracy, FPR, FNR, PSNR, MSE, SSIM, and diagnostic time, respectively. The experimental evaluation results demonstrate that the proposed heart disease diagnosis model attains superior performances than state-of-the-art methods.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142804058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusion Learning from Non-contrast CT Scans for the Detection of Hemorrhagic Transformation in Stroke Patients.","authors":"Chung-Ming Lo, Peng-Hsiang Hung","doi":"10.1007/s10278-024-01350-0","DOIUrl":"https://doi.org/10.1007/s10278-024-01350-0","url":null,"abstract":"<p><p>Hemorrhagic transformation (HT) is a potentially catastrophic complication after acute ischemic stroke. Prevention of HT risk is crucial because it worsens prognosis and increases mortality. This study aimed at developing and validating a computer-aided diagnosis system using pretreatment non-contrast computed tomography (CT) scans for HT prediction in stroke patients undergoing revascularization. This retrospective study included all acute ischemic stroke patients with non-contrast CT before reperfusion therapy who also underwent follow-up MRI from January 2018 to December 2022. Among the 188 evaluated patients, any degree of HT at follow-up imaging was observed in 103 patients. HT diagnosis via MRI was defined as the reference standard for neuroradiologists. Using a database of 2076 serial non-contrast CT images of the brain, pretrained deep learning architectures such as convolutional neural networks and vision transformers (ViTs) were used for feature extraction. The performance of the predictive HT risk model was evaluated via tenfold cross-validation in machine learning classifiers. The accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were evaluated. Using an individual deep learning architecture, DenseNet201 features achieved the highest accuracy of 87% and an AUC of 0.8863 in the classifier of the subspace ensemble k-nearest neighbor. By combining the DenseNet201 and ViT features, the accuracy and AUC can be improved to 88% and 0.8987, respectively, which are significantly better than those of using ViT alone. Detecting HT in stroke patients is a meaningful but challenging issue. On the basis of the model approach, HT diagnosis would be more automatic, efficient, and consistent, which would be helpful in clinic use.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142804158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cone Beam Computed Tomography Image-Quality Improvement Using \"One-Shot\" Super-resolution.","authors":"Takumasa Tsuji, Soichiro Yoshida, Mitsuki Hommyo, Asuka Oyama, Shinobu Kumagai, Kenshiro Shiraishi, Jun'ichi Kotoku","doi":"10.1007/s10278-024-01346-w","DOIUrl":"https://doi.org/10.1007/s10278-024-01346-w","url":null,"abstract":"<p><p>Cone beam computed tomography (CBCT) images are convenient representations for obtaining information about patients' internal organs, but their lower image quality than those of treatment planning CT images constitutes an important shortcoming. Several proposed CBCT image-quality improvement methods based on deep learning require large amounts of training data. Our newly developed model using a super-resolution method, \"one-shot\" super-resolution (OSSR) based on the \"zero-shot\" super-resolution method, requires only small amounts of training data to improve CBCT image quality using only the target CBCT image and the paired treatment planning CT image. For this study, pelvic CBCT images and treatment planning CT images of 30 prostate cancer patients were used. We calculated the root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) to evaluate image-quality improvement and normalized mutual information (NMI) as a quantitative evaluation of positional accuracy. Our proposed method can improve CBCT image quality without requiring large amounts of training data. After applying our proposed method, the resulting RMSE, PSNR, SSIM, and NMI between the CBCT images and the treatment planning CT images were as much as 0.86, 1.05, 1.03, and 1.31 times better than those obtained without using our proposed method. By comparison, CycleGAN exhibited values of 0.91, 1.03, 1.02, and 1.16. The proposed method achieved performance equivalent to that of CycleGAN, which requires images from approximately 30 patients for training. Findings demonstrated improvement of CBCT image quality using only the target CBCT images and the paired treatment planning CT images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142782336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Learning Based on Digital Mammography to Reduce the Need for Invasive Biopsies of Benign Calcifications Classified in BI-RADS Category 4.","authors":"Neng Wang, Wenjie Xu, Huogen Wang, Sikai Wu, Jian Wang, Weiqun Ao, Cui Zhang, Yun Zhu, Zongyu Xie, Guoqun Mao","doi":"10.1007/s10278-024-01347-9","DOIUrl":"https://doi.org/10.1007/s10278-024-01347-9","url":null,"abstract":"<p><p>This study aims to develop a machine learning model applied on digital mammograms to reduce unnecessary invasive biopsies for suspicious calcifications classified as BI-RADS category 4. This study retrospectively analyzed data from 372 female patients with pathologically confirmed BI-RADS category 4 mammographic calcifications. Patients from the First Affiliated Hospital of Bengbu Medical College (n = 275) were divided chronologically into a training and internal validation set. An external validation set (n = 97) was recruited from Tongde Hospital of Zhejiang Province. We first segmented calcifications using nnUnet, and then built a radiomics model and deep learning model, respectively. Finally, we used an information fusion method to combine the results of the two models to obtain the final prediction. The different models, including the radiomics model, the deep learning model, and the fusion model, were evaluated on the validation set from two hospitals. In the external validation set, the radiomics model yielded an AUC of 0.883 (95% CI, 0.802-0.939), a sensitivity of 0.921, and a specificity of 0.735, and the deep learning model yielded an AUC of 0.873 (95% CI, 0.789-0.932), a sensitivity of 0.905, and a specificity of 0.853. The fusion model achieved an AUC of 0.947 (95% CI, 0.882-0.982), sensitivity of 0.825, and specificity of 0.941 in the external validation set. The fusion model has the potential to reduce the need for invasive biopsies of benign mammographic calcifications classified as BI-RADS category 4, without sacrificing the diagnostic accuracy for malignant cases.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142782337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}